I0814 10:04:24.082217 6 e2e.go:243] Starting e2e run "3a05f229-9ce6-45f5-8825-089b8b271804" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597399462 - Will randomize all specs
Will run 215 of 4413 specs

Aug 14 10:04:24.287: INFO: >>> kubeConfig: /root/.kube/config
Aug 14 10:04:24.292: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 14 10:04:24.434: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 14 10:04:24.623: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 14 10:04:24.623: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 14 10:04:24.623: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 14 10:04:24.640: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 14 10:04:24.640: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 14 10:04:24.640: INFO: e2e test version: v1.15.12
Aug 14 10:04:24.641: INFO: kube-apiserver version: v1.15.12
SSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:04:24.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
Aug 14 10:04:25.329: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 14 10:04:27.606: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1200,SelfLink:/api/v1/namespaces/watch-1200/configmaps/e2e-watch-test-label-changed,UID:5d245669-bec5-4694-baf3-637c206918b8,ResourceVersion:4859998,Generation:0,CreationTimestamp:2020-08-14 10:04:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 14 10:04:27.606: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1200,SelfLink:/api/v1/namespaces/watch-1200/configmaps/e2e-watch-test-label-changed,UID:5d245669-bec5-4694-baf3-637c206918b8,ResourceVersion:4860000,Generation:0,CreationTimestamp:2020-08-14 10:04:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 14 10:04:27.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1200,SelfLink:/api/v1/namespaces/watch-1200/configmaps/e2e-watch-test-label-changed,UID:5d245669-bec5-4694-baf3-637c206918b8,ResourceVersion:4860002,Generation:0,CreationTimestamp:2020-08-14 10:04:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 14 10:04:38.587: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1200,SelfLink:/api/v1/namespaces/watch-1200/configmaps/e2e-watch-test-label-changed,UID:5d245669-bec5-4694-baf3-637c206918b8,ResourceVersion:4860025,Generation:0,CreationTimestamp:2020-08-14 10:04:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 14 10:04:38.588: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1200,SelfLink:/api/v1/namespaces/watch-1200/configmaps/e2e-watch-test-label-changed,UID:5d245669-bec5-4694-baf3-637c206918b8,ResourceVersion:4860027,Generation:0,CreationTimestamp:2020-08-14 10:04:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 14 10:04:38.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1200,SelfLink:/api/v1/namespaces/watch-1200/configmaps/e2e-watch-test-label-changed,UID:5d245669-bec5-4694-baf3-637c206918b8,ResourceVersion:4860029,Generation:0,CreationTimestamp:2020-08-14 10:04:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:04:38.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1200" for this suite.
Aug 14 10:04:44.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:04:45.056: INFO: namespace watch-1200 deletion completed in 6.287273631s

• [SLOW TEST:20.415 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:04:45.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 10:04:45.918: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 14 10:04:50.922: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 14 10:04:54.929: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 14 10:04:56.954: INFO: Creating deployment "test-rollover-deployment"
Aug 14 10:04:56.960: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 14 10:04:58.965: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 14 10:04:58.972: INFO: Ensure that both replica sets have 1 created replica
Aug 14 10:04:58.977: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 14 10:04:58.984: INFO: Updating deployment test-rollover-deployment
Aug 14 10:04:58.984: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 14 10:05:01.715: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 14 10:05:01.746: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 14 10:05:01.794: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:01.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996301, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:04.798: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:04.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996301, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:05.896: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:05.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996301, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:08.399: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:08.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996301, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:09.902: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:09.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996301, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:11.917: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:11.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996310, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:14.240: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:14.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996310, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:15.996: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:15.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996310, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:17.803: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:17.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996310, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:20.033: INFO: all replica sets need to contain the pod-template-hash label
Aug 14 10:05:20.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996310, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:23.071: INFO:
Aug 14 10:05:23.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996297, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996321, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996296, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 10:05:24.061: INFO:
Aug 14 10:05:24.061: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 14 10:05:24.067: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7234,SelfLink:/apis/apps/v1/namespaces/deployment-7234/deployments/test-rollover-deployment,UID:b3a7b937-f007-4f61-8f85-b52a68d09fcf,ResourceVersion:4860223,Generation:2,CreationTimestamp:2020-08-14 10:04:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-14 10:04:57 +0000 UTC 2020-08-14 10:04:57 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-14 10:05:23 +0000 UTC 2020-08-14 10:04:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Aug 14 10:05:24.070: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7234,SelfLink:/apis/apps/v1/namespaces/deployment-7234/replicasets/test-rollover-deployment-854595fc44,UID:abc41480-76f0-4f9b-a58a-3873d55ef46c,ResourceVersion:4860209,Generation:2,CreationTimestamp:2020-08-14 10:04:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b3a7b937-f007-4f61-8f85-b52a68d09fcf 0xc0026eb887 0xc0026eb888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 14 10:05:24.070: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 14 10:05:24.070: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7234,SelfLink:/apis/apps/v1/namespaces/deployment-7234/replicasets/test-rollover-controller,UID:1b2f91d8-d871-40db-86c2-3192417b9089,ResourceVersion:4860222,Generation:2,CreationTimestamp:2020-08-14 10:04:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b3a7b937-f007-4f61-8f85-b52a68d09fcf 0xc0026eb7b7 0xc0026eb7b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 14 10:05:24.070: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7234,SelfLink:/apis/apps/v1/namespaces/deployment-7234/replicasets/test-rollover-deployment-9b8b997cf,UID:2a14cc59-57e5-4d2c-9fb3-3093ac85b49c,ResourceVersion:4860133,Generation:2,CreationTimestamp:2020-08-14 10:04:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b3a7b937-f007-4f61-8f85-b52a68d09fcf 0xc0026eb950 0xc0026eb951}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 14 10:05:24.074: INFO: Pod "test-rollover-deployment-854595fc44-rf2gc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-rf2gc,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7234,SelfLink:/api/v1/namespaces/deployment-7234/pods/test-rollover-deployment-854595fc44-rf2gc,UID:ef597d29-99e6-4854-a242-f46fe4f725ab,ResourceVersion:4860158,Generation:0,CreationTimestamp:2020-08-14 10:05:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 abc41480-76f0-4f9b-a58a-3873d55ef46c 0xc0025be517 0xc0025be518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrs2j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrs2j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-nrs2j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025be590} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025be5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:05:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:05:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:05:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:05:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.153,StartTime:2020-08-14 10:05:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-14 10:05:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://4a05a13fcafc8eecaffe0e4d74b084445b0651a81019fc415a3a3ea74525b24e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:05:24.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7234" for this suite.
Aug 14 10:05:43.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:05:43.573: INFO: namespace deployment-7234 deletion completed in 19.496185996s

• [SLOW TEST:58.517 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:05:43.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Aug 14 10:05:50.799: INFO: Pod pod-hostip-6020695a-4c8b-4867-ba2c-eb0ad1628e41 has hostIP: 
172.18.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:05:50.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8457" for this suite. Aug 14 10:06:19.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:06:19.798: INFO: namespace pods-8457 deletion completed in 28.995122371s • [SLOW TEST:36.224 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:06:19.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 14 10:06:22.048: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:06:41.478: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "pods-8659" for this suite. Aug 14 10:07:38.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:07:38.764: INFO: namespace pods-8659 deletion completed in 57.283856884s • [SLOW TEST:78.967 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:07:38.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 14 10:07:39.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8015' Aug 14 10:08:02.232: INFO: stderr: "kubectl run 
--generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 14 10:08:02.232: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Aug 14 10:08:04.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8015' Aug 14 10:08:05.143: INFO: stderr: "" Aug 14 10:08:05.143: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:08:05.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8015" for this suite. 
Aug 14 10:08:31.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:08:31.520: INFO: namespace kubectl-8015 deletion completed in 26.373163251s • [SLOW TEST:52.755 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:08:31.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 14 10:08:31.674: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 14 10:08:36.678: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:08:38.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1115" for this suite. 
Aug 14 10:08:54.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:08:54.713: INFO: namespace replication-controller-1115 deletion completed in 16.105026796s • [SLOW TEST:23.192 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:08:54.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-a6c6d34e-5676-4e3a-899f-f63d4497195b STEP: Creating a pod to test consume secrets Aug 14 10:08:54.963: INFO: Waiting up to 5m0s for pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134" in namespace "secrets-8521" to be "success or failure" Aug 14 10:08:54.970: INFO: Pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134": Phase="Pending", Reason="", readiness=false. Elapsed: 7.157903ms Aug 14 10:08:57.091: INFO: Pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.127242216s Aug 14 10:08:59.139: INFO: Pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175872103s Aug 14 10:09:01.383: INFO: Pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420117107s Aug 14 10:09:03.387: INFO: Pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134": Phase="Running", Reason="", readiness=true. Elapsed: 8.423730202s Aug 14 10:09:05.392: INFO: Pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.428724112s STEP: Saw pod success Aug 14 10:09:05.392: INFO: Pod "pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134" satisfied condition "success or failure" Aug 14 10:09:05.395: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134 container secret-volume-test: STEP: delete the pod Aug 14 10:09:05.463: INFO: Waiting for pod pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134 to disappear Aug 14 10:09:05.468: INFO: Pod pod-secrets-fedde856-5e69-4d4b-95df-02b49373e134 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:09:05.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8521" for this suite. 
Aug 14 10:09:15.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:09:16.245: INFO: namespace secrets-8521 deletion completed in 10.774664532s • [SLOW TEST:21.532 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:09:16.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 14 10:09:48.499: INFO: Container started at 2020-08-14 10:09:30 +0000 UTC, pod became ready at 2020-08-14 10:09:48 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:09:48.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2077" for 
this suite. Aug 14 10:10:12.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:10:13.245: INFO: namespace container-probe-2077 deletion completed in 24.742620681s • [SLOW TEST:57.000 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:10:13.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-2a7ec317-af88-4c37-9dab-8fdb0460c3be in namespace container-probe-7772 Aug 14 10:10:25.289: INFO: Started pod liveness-2a7ec317-af88-4c37-9dab-8fdb0460c3be in namespace container-probe-7772 STEP: checking the pod's current state and verifying that restartCount is present Aug 14 10:10:25.291: INFO: Initial restart count of pod 
liveness-2a7ec317-af88-4c37-9dab-8fdb0460c3be is 0 Aug 14 10:10:47.582: INFO: Restart count of pod container-probe-7772/liveness-2a7ec317-af88-4c37-9dab-8fdb0460c3be is now 1 (22.291097847s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:10:48.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7772" for this suite. Aug 14 10:11:01.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:11:01.497: INFO: namespace container-probe-7772 deletion completed in 12.698629422s • [SLOW TEST:48.251 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:11:01.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 14 10:11:02.137: INFO: Creating ReplicaSet my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc Aug 14 10:11:02.312: INFO: Pod name 
my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc: Found 0 pods out of 1 Aug 14 10:11:07.445: INFO: Pod name my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc: Found 1 pods out of 1 Aug 14 10:11:07.445: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc" is running Aug 14 10:11:11.450: INFO: Pod "my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc-cx9ph" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:11:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:11:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:11:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:11:02 +0000 UTC Reason: Message:}]) Aug 14 10:11:11.450: INFO: Trying to dial the pod Aug 14 10:11:16.465: INFO: Controller my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc: Got expected result from replica 1 [my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc-cx9ph]: "my-hostname-basic-2a06eb36-92f5-45fd-bd2c-ba2264d36ccc-cx9ph", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:11:16.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3768" for this suite. 
Aug 14 10:11:24.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:11:24.565: INFO: namespace replicaset-3768 deletion completed in 8.095514546s • [SLOW TEST:23.068 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:11:24.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 14 10:11:24.690: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 14 10:11:24.711: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 14 10:11:29.756: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 14 10:11:34.047: INFO: Creating deployment "test-rolling-update-deployment" Aug 14 10:11:34.659: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" 
has Aug 14 10:11:35.457: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 14 10:11:37.930: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 14 10:11:38.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996695, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 10:11:40.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996695, 
loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 10:11:42.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996695, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 10:11:44.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996696, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732996695, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 10:11:46.352: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 14 10:11:46.390: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7519,SelfLink:/apis/apps/v1/namespaces/deployment-7519/deployments/test-rolling-update-deployment,UID:3f09de90-c090-4e59-803a-4225b348b19e,ResourceVersion:4861692,Generation:1,CreationTimestamp:2020-08-14 10:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-14 10:11:36 +0000 UTC 2020-08-14 10:11:36 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-14 10:11:45 +0000 UTC 2020-08-14 10:11:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 14 10:11:46.393: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7519,SelfLink:/apis/apps/v1/namespaces/deployment-7519/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:20170b0b-43a1-4d82-9e61-d0d6ec4ff287,ResourceVersion:4861680,Generation:1,CreationTimestamp:2020-08-14 10:11:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3f09de90-c090-4e59-803a-4225b348b19e 0xc0028b26e7 0xc0028b26e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 14 10:11:46.393: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 14 10:11:46.393: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7519,SelfLink:/apis/apps/v1/namespaces/deployment-7519/replicasets/test-rolling-update-controller,UID:15eab1c0-e3c1-4ca9-ac4a-27dc7a258c43,ResourceVersion:4861690,Generation:2,CreationTimestamp:2020-08-14 10:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3f09de90-c090-4e59-803a-4225b348b19e 0xc0028b2617 0xc0028b2618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 14 10:11:46.395: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-l2274" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-l2274,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7519,SelfLink:/api/v1/namespaces/deployment-7519/pods/test-rolling-update-deployment-79f6b9d75c-l2274,UID:a869794c-72a0-41aa-90ee-ddaab09cb2d2,ResourceVersion:4861679,Generation:0,CreationTimestamp:2020-08-14 10:11:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 20170b0b-43a1-4d82-9e61-d0d6ec4ff287 0xc0028b2fb7 0xc0028b2fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-56bdm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56bdm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-56bdm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028b3030} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028b3050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:11:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:11:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:11:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:11:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.162,StartTime:2020-08-14 10:11:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-14 10:11:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://096e9b5fb2e706d6fdfd696a7e33a3d91dd13dc09d373cf0ec3fefd91e738614}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:11:46.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-7519" for this suite. Aug 14 10:11:56.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:11:56.508: INFO: namespace deployment-7519 deletion completed in 10.11005335s • [SLOW TEST:31.944 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:11:56.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2652, will wait for the garbage collector to delete the pods Aug 14 10:12:06.662: INFO: Deleting Job.batch foo took: 6.551691ms Aug 14 10:12:06.962: INFO: Terminating Job.batch foo pods took: 300.26867ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:12:48.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2652" for this suite. 
Aug 14 10:12:57.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:12:57.202: INFO: namespace job-2652 deletion completed in 8.733171881s • [SLOW TEST:60.692 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:12:57.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ac1aea62-ccdc-4eea-9894-baced8ae3399 STEP: Creating a pod to test consume secrets Aug 14 10:12:57.620: INFO: Waiting up to 5m0s for pod "pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39" in namespace "secrets-3745" to be "success or failure" Aug 14 10:12:57.675: INFO: Pod "pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39": Phase="Pending", Reason="", readiness=false. Elapsed: 55.216454ms Aug 14 10:12:59.716: INFO: Pod "pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.09582454s Aug 14 10:13:01.719: INFO: Pod "pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099395728s STEP: Saw pod success Aug 14 10:13:01.720: INFO: Pod "pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39" satisfied condition "success or failure" Aug 14 10:13:01.722: INFO: Trying to get logs from node iruya-worker pod pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39 container secret-volume-test: STEP: delete the pod Aug 14 10:13:01.790: INFO: Waiting for pod pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39 to disappear Aug 14 10:13:01.802: INFO: Pod pod-secrets-065fa26d-1fc4-49fd-9ac8-34cbae121e39 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:13:01.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3745" for this suite. Aug 14 10:13:09.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:13:09.909: INFO: namespace secrets-3745 deletion completed in 8.104429445s • [SLOW TEST:12.707 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:13:09.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 14 10:13:22.833: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:13:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8432" for this suite. 
Aug 14 10:13:31.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:13:32.027: INFO: namespace container-runtime-8432 deletion completed in 8.631566738s • [SLOW TEST:22.117 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:13:32.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79 Aug 14 10:13:33.701: INFO: Pod name my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79: Found 0 pods out of 1 Aug 14 10:13:39.286: INFO: Pod name 
my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79: Found 1 pods out of 1 Aug 14 10:13:39.286: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79" are running Aug 14 10:13:46.453: INFO: Pod "my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79-qj26x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:13:35 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:13:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:13:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 10:13:33 +0000 UTC Reason: Message:}]) Aug 14 10:13:46.453: INFO: Trying to dial the pod Aug 14 10:13:53.083: INFO: Controller my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79: Got expected result from replica 1 [my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79-qj26x]: "my-hostname-basic-9a50db6c-11fa-4a53-be99-a9654f7d9a79-qj26x", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:13:53.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6283" for this suite. 
Aug 14 10:14:11.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:14:13.658: INFO: namespace replication-controller-6283 deletion completed in 20.364816464s • [SLOW TEST:41.630 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:14:13.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 14 10:14:15.724: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 14 10:14:16.468: INFO: Number of nodes with available pods: 0 Aug 14 10:14:16.468: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 14 10:14:19.454: INFO: Number of nodes with available pods: 0 Aug 14 10:14:19.454: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:20.904: INFO: Number of nodes with available pods: 0 Aug 14 10:14:20.904: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:21.556: INFO: Number of nodes with available pods: 0 Aug 14 10:14:21.556: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:23.718: INFO: Number of nodes with available pods: 0 Aug 14 10:14:23.718: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:24.458: INFO: Number of nodes with available pods: 0 Aug 14 10:14:24.458: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:25.500: INFO: Number of nodes with available pods: 0 Aug 14 10:14:25.500: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:26.458: INFO: Number of nodes with available pods: 1 Aug 14 10:14:26.458: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 14 10:14:26.502: INFO: Number of nodes with available pods: 1 Aug 14 10:14:26.502: INFO: Number of running nodes: 0, number of available pods: 1 Aug 14 10:14:27.505: INFO: Number of nodes with available pods: 0 Aug 14 10:14:27.505: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 14 10:14:27.579: INFO: Number of nodes with available pods: 0 Aug 14 10:14:27.579: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:28.583: INFO: Number of nodes with available pods: 0 Aug 14 10:14:28.583: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:29.583: INFO: Number of nodes with available pods: 0 Aug 14 10:14:29.583: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:30.582: INFO: Number of nodes with available 
pods: 0 Aug 14 10:14:30.582: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:31.582: INFO: Number of nodes with available pods: 0 Aug 14 10:14:31.582: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:32.583: INFO: Number of nodes with available pods: 0 Aug 14 10:14:32.583: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:33.584: INFO: Number of nodes with available pods: 0 Aug 14 10:14:33.584: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:34.583: INFO: Number of nodes with available pods: 0 Aug 14 10:14:34.583: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:35.963: INFO: Number of nodes with available pods: 0 Aug 14 10:14:35.963: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:36.676: INFO: Number of nodes with available pods: 0 Aug 14 10:14:36.676: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:37.627: INFO: Number of nodes with available pods: 0 Aug 14 10:14:37.627: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:38.584: INFO: Number of nodes with available pods: 0 Aug 14 10:14:38.584: INFO: Node iruya-worker is running more than one daemon pod Aug 14 10:14:39.583: INFO: Number of nodes with available pods: 1 Aug 14 10:14:39.583: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5246, will wait for the garbage collector to delete the pods Aug 14 10:14:39.646: INFO: Deleting DaemonSet.extensions daemon-set took: 5.279132ms Aug 14 10:14:39.946: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.286816ms Aug 14 10:14:46.095: INFO: Number of nodes with available pods: 0 Aug 14 10:14:46.095: INFO: Number 
of running nodes: 0, number of available pods: 0 Aug 14 10:14:46.101: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5246/daemonsets","resourceVersion":"4862588"},"items":null} Aug 14 10:14:46.305: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5246/pods","resourceVersion":"4862589"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:14:46.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5246" for this suite. Aug 14 10:15:00.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:15:00.737: INFO: namespace daemonsets-5246 deletion completed in 14.112096651s • [SLOW TEST:47.079 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:15:00.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 14 10:15:15.689: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:15.749: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:17.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:17.753: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:19.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:19.752: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:21.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:21.892: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:23.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:23.850: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:25.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:25.752: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:27.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:27.761: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:29.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:29.754: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:31.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:31.753: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 10:15:33.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 10:15:33.753: INFO: Pod 
pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:15:33.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9684" for this suite. Aug 14 10:16:01.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:16:02.094: INFO: namespace container-lifecycle-hook-9684 deletion completed in 28.33620252s • [SLOW TEST:61.356 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:16:02.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
projected-configmap-test-volume-f591ea17-6682-4c56-aa6b-65b7623c7d94 STEP: Creating a pod to test consume configMaps Aug 14 10:16:02.810: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4" in namespace "projected-542" to be "success or failure" Aug 14 10:16:03.679: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 868.609613ms Aug 14 10:16:05.796: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986144795s Aug 14 10:16:08.569: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.758524425s Aug 14 10:16:10.571: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.761255345s Aug 14 10:16:12.574: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.763954866s Aug 14 10:16:14.601: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4": Phase="Running", Reason="", readiness=true. Elapsed: 11.790718299s Aug 14 10:16:16.605: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.794731179s STEP: Saw pod success Aug 14 10:16:16.605: INFO: Pod "pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4" satisfied condition "success or failure" Aug 14 10:16:16.608: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4 container projected-configmap-volume-test: STEP: delete the pod Aug 14 10:16:16.775: INFO: Waiting for pod pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4 to disappear Aug 14 10:16:16.810: INFO: Pod pod-projected-configmaps-fb45fcdb-01a9-4926-83a9-fadffc4d8ec4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:16:16.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-542" for this suite. Aug 14 10:16:22.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:16:22.913: INFO: namespace projected-542 deletion completed in 6.098329852s • [SLOW TEST:20.819 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:16:22.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-4xdh STEP: Creating a pod to test atomic-volume-subpath Aug 14 10:16:23.170: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4xdh" in namespace "subpath-2776" to be "success or failure" Aug 14 10:16:23.264: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Pending", Reason="", readiness=false. Elapsed: 93.857015ms Aug 14 10:16:25.300: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129512098s Aug 14 10:16:27.935: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.764808738s Aug 14 10:16:29.971: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.800503174s Aug 14 10:16:32.036: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 8.866337222s Aug 14 10:16:34.445: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 11.27437843s Aug 14 10:16:36.449: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 13.278672088s Aug 14 10:16:38.515: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 15.345311986s Aug 14 10:16:40.519: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 17.348859386s Aug 14 10:16:42.557: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. 
Elapsed: 19.387231139s Aug 14 10:16:45.253: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 22.083134697s Aug 14 10:16:47.474: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 24.303949948s Aug 14 10:16:49.516: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Running", Reason="", readiness=true. Elapsed: 26.345676423s Aug 14 10:16:51.895: INFO: Pod "pod-subpath-test-secret-4xdh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.724767204s STEP: Saw pod success Aug 14 10:16:51.895: INFO: Pod "pod-subpath-test-secret-4xdh" satisfied condition "success or failure" Aug 14 10:16:52.163: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-4xdh container test-container-subpath-secret-4xdh: STEP: delete the pod Aug 14 10:16:52.202: INFO: Waiting for pod pod-subpath-test-secret-4xdh to disappear Aug 14 10:16:52.243: INFO: Pod pod-subpath-test-secret-4xdh no longer exists STEP: Deleting pod pod-subpath-test-secret-4xdh Aug 14 10:16:52.243: INFO: Deleting pod "pod-subpath-test-secret-4xdh" in namespace "subpath-2776" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:16:52.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2776" for this suite. 
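For reference, a pod equivalent to the one this subpath test creates might look like the sketch below (pod and container names are taken from the log above; the image, secret name, key, and command are assumptions — the authoritative spec lives in test/e2e/storage/subpath.go):

```yaml
# Hypothetical reconstruction of the atomic-writer subpath test pod.
# Image, secret name, and command are assumptions, not the real e2e spec.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-4xdh
  namespace: subpath-2776
spec:
  volumes:
    - name: test-volume
      secret:
        secretName: my-secret        # assumed secret name
  containers:
    - name: test-container-subpath-secret-4xdh
      image: busybox                  # assumed image
      command: ["sh", "-c", "cat /test-volume && sleep 25"]  # assumed command
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
          subPath: secret-key         # assumed key; subPath binds one file
  restartPolicy: Never
```

Secret volumes are "atomic writer" volumes: the kubelet updates them via an atomic symlink swap, and the test verifies that a `subPath` mount into such a volume still resolves correctly for the life of the pod.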
Aug 14 10:17:03.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:17:03.277: INFO: namespace subpath-2776 deletion completed in 11.02805274s • [SLOW TEST:40.363 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:17:03.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-132ad9b6-ddd9-49d0-90fd-508caed12b29 STEP: Creating a pod to test consume secrets Aug 14 10:17:03.870: INFO: Waiting up to 5m0s for pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d" in namespace "secrets-9133" to be "success or failure" Aug 14 10:17:03.938: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 68.312931ms Aug 14 10:17:06.061: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191675472s Aug 14 10:17:08.545: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.675723532s Aug 14 10:17:10.549: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679454475s Aug 14 10:17:13.038: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.168114597s Aug 14 10:17:15.042: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.172625989s Aug 14 10:17:17.415: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Running", Reason="", readiness=true. Elapsed: 13.545150581s Aug 14 10:17:19.418: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.548397894s STEP: Saw pod success Aug 14 10:17:19.418: INFO: Pod "pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d" satisfied condition "success or failure" Aug 14 10:17:19.420: INFO: Trying to get logs from node iruya-worker pod pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d container secret-volume-test: STEP: delete the pod Aug 14 10:17:19.768: INFO: Waiting for pod pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d to disappear Aug 14 10:17:20.055: INFO: Pod pod-secrets-9c2f8b3e-34aa-429d-bfe3-98e66650e62d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:17:20.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9133" for this suite. 
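The `defaultMode` behavior this test exercises can be expressed with a manifest like the following sketch (the secret name is from the log; the pod name, image, mode value, and command are assumptions):

```yaml
# Hypothetical pod consuming a secret volume with defaultMode set.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode-example   # hypothetical name
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-132ad9b6-ddd9-49d0-90fd-508caed12b29
        defaultMode: 0400   # YAML octal; in JSON this must be decimal (256)
  containers:
    - name: secret-volume-test
      image: busybox        # assumed image
      command: ["sh", "-c", "ls -l /etc/secret-volume"]   # assumed command
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  restartPolicy: Never
```

Every file projected from the secret then carries mode 0400 unless a per-key `items[].mode` overrides it.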
Aug 14 10:17:30.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:17:30.817: INFO: namespace secrets-9133 deletion completed in 10.758703615s • [SLOW TEST:27.540 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:17:30.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 14 10:17:39.679: INFO: 10 pods remaining Aug 14 10:17:39.679: INFO: 10 pods has nil DeletionTimestamp Aug 14 10:17:39.679: INFO: Aug 14 10:17:44.016: INFO: 1 pods remaining Aug 14 10:17:44.016: INFO: 0 pods has nil DeletionTimestamp Aug 14 10:17:44.016: INFO: Aug 14 10:17:45.667: INFO: 0 pods remaining Aug 14 10:17:45.667: INFO: 0 pods has nil DeletionTimestamp Aug 14 10:17:45.667: INFO: STEP: Gathering metrics W0814 10:17:47.608958 6 metrics_grabber.go:79] Master node 
is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 10:17:47.609: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:17:47.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7327" for this suite.
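The "deleteOptions says so" behavior above corresponds to foreground cascading deletion: the DELETE request carries a DeleteOptions body with `propagationPolicy: Foreground`, so the ReplicationController is kept (with a `foregroundDeletion` finalizer) until all of its pods are gone — matching the "10 pods remaining … 0 pods remaining" countdown in the log. A sketch of that request body:

```yaml
# meta/v1 DeleteOptions body sent with the DELETE request.
# With Foreground propagation, the owner object is only removed
# after the garbage collector has deleted all of its dependents.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # alternatives: Background, Orphan
```

Recent kubectl versions expose the same choice as `kubectl delete --cascade=foreground|background|orphan` (in the v1.15 era of this log, `--cascade` was still a boolean flag).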
Aug 14 10:18:02.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:18:03.245: INFO: namespace gc-7327 deletion completed in 15.079862796s • [SLOW TEST:32.427 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:18:03.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-46d49dd7-929c-41b8-b9d6-726a01275788 in namespace container-probe-1687 Aug 14 10:18:15.717: INFO: Started pod busybox-46d49dd7-929c-41b8-b9d6-726a01275788 in namespace container-probe-1687 STEP: checking the pod's current state and verifying that restartCount is present Aug 14 10:18:15.719: INFO: Initial restart count of pod busybox-46d49dd7-929c-41b8-b9d6-726a01275788 is 0 STEP: 
deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:22:20.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1687" for this suite. Aug 14 10:22:30.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:22:30.857: INFO: namespace container-probe-1687 deletion completed in 10.782653719s • [SLOW TEST:267.612 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:22:30.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0814 10:22:38.204340 
6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 10:22:38.204: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:22:38.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2639" for this suite.
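The mechanism behind "delete RS created by deployment when not orphaning" is the `ownerReferences` chain: the Deployment owns the ReplicaSet, so when the Deployment is deleted without `propagationPolicy: Orphan`, the garbage collector removes the ReplicaSet (and, transitively, its pods). A sketch of the metadata fragment involved (names and UID are invented for illustration):

```yaml
# Fragment of a ReplicaSet created by a Deployment (hypothetical values).
metadata:
  name: example-deployment-7d4b9c5f   # hypothetical RS name
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: example-deployment         # hypothetical owner name
      uid: 1b2c3d4e-0000-0000-0000-000000000000   # hypothetical UID
      controller: true
      blockOwnerDeletion: true   # lets Foreground deletion wait on this RS
```

The "expected 0 rs, got 1 rs" retries in the log are simply the test polling until the collector catches up.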
Aug 14 10:23:15.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:23:20.398: INFO: namespace gc-2639 deletion completed in 42.18962497s • [SLOW TEST:49.540 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:23:20.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-d62abcc2-c3ad-47e9-9c88-c6bbc31e4c28 STEP: Creating secret with name s-test-opt-upd-95176c09-8b27-42e5-b0c9-f0b00c483fae STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d62abcc2-c3ad-47e9-9c88-c6bbc31e4c28 STEP: Updating secret s-test-opt-upd-95176c09-8b27-42e5-b0c9-f0b00c483fae STEP: Creating secret with name s-test-opt-create-e043d076-99be-4e01-a9d1-ddd70ff8051c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:25:04.013: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8484" for this suite. Aug 14 10:25:28.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:25:29.047: INFO: namespace projected-8484 deletion completed in 25.031261975s • [SLOW TEST:128.649 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:25:29.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 14 10:25:29.940: INFO: Waiting up to 5m0s for pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e" in namespace "emptydir-7676" to be "success or failure" Aug 14 10:25:30.124: INFO: Pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 184.753818ms Aug 14 10:25:32.162: INFO: Pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222511908s Aug 14 10:25:34.166: INFO: Pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226661799s Aug 14 10:25:36.170: INFO: Pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230621173s Aug 14 10:25:38.321: INFO: Pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e": Phase="Running", Reason="", readiness=true. Elapsed: 8.381641148s Aug 14 10:25:40.392: INFO: Pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.451864292s STEP: Saw pod success Aug 14 10:25:40.392: INFO: Pod "pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e" satisfied condition "success or failure" Aug 14 10:25:40.396: INFO: Trying to get logs from node iruya-worker pod pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e container test-container: STEP: delete the pod Aug 14 10:25:40.739: INFO: Waiting for pod pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e to disappear Aug 14 10:25:41.007: INFO: Pod pod-1754866b-3b8a-4ebc-b4ca-0ffaca255a0e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:25:41.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7676" for this suite. 
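The "(non-root,0666,default)" case names the three variables of this emptyDir test: a non-root user, file mode 0666, and the default (node-disk) medium. A hedged sketch of an equivalent pod (all names, the image, and the command are assumptions; the real test uses the e2e mounttest image):

```yaml
# Hypothetical pod exercising an emptyDir write as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-example        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                  # non-root
  volumes:
    - name: test-volume
      emptyDir: {}                   # "default" medium = node disk, not tmpfs
  containers:
    - name: test-container
      image: busybox                 # assumed image
      command: ["sh", "-c", "echo data > /mnt/file && chmod 0666 /mnt/file && ls -l /mnt/file"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt
  restartPolicy: Never
```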
Aug 14 10:25:49.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:25:49.621: INFO: namespace emptydir-7676 deletion completed in 8.60942073s • [SLOW TEST:20.573 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:25:49.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Aug 14 10:25:49.731: INFO: Waiting up to 5m0s for pod "client-containers-5f691aad-55d1-4121-8092-4372b1f3554e" in namespace "containers-9387" to be "success or failure" Aug 14 10:25:49.743: INFO: Pod "client-containers-5f691aad-55d1-4121-8092-4372b1f3554e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.301595ms Aug 14 10:25:51.747: INFO: Pod "client-containers-5f691aad-55d1-4121-8092-4372b1f3554e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016070273s Aug 14 10:25:53.801: INFO: Pod "client-containers-5f691aad-55d1-4121-8092-4372b1f3554e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069655104s Aug 14 10:25:55.822: INFO: Pod "client-containers-5f691aad-55d1-4121-8092-4372b1f3554e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090828574s Aug 14 10:25:58.232: INFO: Pod "client-containers-5f691aad-55d1-4121-8092-4372b1f3554e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.500766496s STEP: Saw pod success Aug 14 10:25:58.232: INFO: Pod "client-containers-5f691aad-55d1-4121-8092-4372b1f3554e" satisfied condition "success or failure" Aug 14 10:25:58.235: INFO: Trying to get logs from node iruya-worker pod client-containers-5f691aad-55d1-4121-8092-4372b1f3554e container test-container: STEP: delete the pod Aug 14 10:25:58.370: INFO: Waiting for pod client-containers-5f691aad-55d1-4121-8092-4372b1f3554e to disappear Aug 14 10:25:58.382: INFO: Pod client-containers-5f691aad-55d1-4121-8092-4372b1f3554e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:25:58.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9387" for this suite. 
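"Image defaults" here means the pod spec sets neither `command` nor `args`, so the container runs the image's own ENTRYPOINT and CMD. A minimal sketch (pod name and image are assumptions):

```yaml
# With no command/args in the spec, Kubernetes uses the image's
# ENTRYPOINT (command) and CMD (args) as-is.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  containers:
    - name: test-container
      image: busybox:1.29           # assumed image
      # no command: / args: — the image defaults apply
  restartPolicy: Never
```

Setting only `command` replaces ENTRYPOINT and discards CMD; setting only `args` keeps ENTRYPOINT but replaces CMD.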
Aug 14 10:26:04.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:26:04.593: INFO: namespace containers-9387 deletion completed in 6.19572454s • [SLOW TEST:14.972 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:26:04.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 14 10:26:04.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7057' Aug 14 10:26:12.220: INFO: stderr: "kubectl run --generator=job/v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 14 10:26:12.220: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Aug 14 10:26:12.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7057' Aug 14 10:26:12.533: INFO: stderr: "" Aug 14 10:26:12.533: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:26:12.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7057" for this suite. Aug 14 10:26:20.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:26:20.763: INFO: namespace kubectl-7057 deletion completed in 8.203115439s • [SLOW TEST:16.170 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 
10:26:20.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Aug 14 10:26:33.778: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2699 pod-service-account-0fc16d5d-564b-4168-a211-12a928882294 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 14 10:26:34.393: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2699 pod-service-account-0fc16d5d-564b-4168-a211-12a928882294 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 14 10:26:34.820: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2699 pod-service-account-0fc16d5d-564b-4168-a211-12a928882294 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:26:35.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2699" for this suite. 
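The three `kubectl exec … cat` calls above read the files that the service account admission controller mounts into every pod by default. A sketch of a pod that would get that mount (pod name and image are assumptions):

```yaml
# Hypothetical pod; the token volume is injected automatically, not declared.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example   # hypothetical name
spec:
  serviceAccountName: default
  automountServiceAccountToken: true  # the default; shown for clarity
  containers:
    - name: test
      image: busybox                  # assumed image
      command: ["sleep", "3600"]
```

Inside the container, the credentials appear under
/var/run/secrets/kubernetes.io/serviceaccount/ as `token`, `ca.crt`, and `namespace` — exactly the three paths the test reads.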
Aug 14 10:26:41.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:26:41.125: INFO: namespace svcaccounts-2699 deletion completed in 6.095079147s • [SLOW TEST:20.361 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:26:41.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-3592 I0814 10:26:41.206428 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3592, replica count: 1 I0814 10:26:42.256973 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:43.257209 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:44.257426 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:45.257728 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:46.257992 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:47.258204 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:48.258463 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:49.258710 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:50.258927 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:51.259096 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 10:26:52.259300 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 14 10:26:52.474: INFO: Created: latency-svc-fxb9r Aug 14 10:26:52.477: INFO: Got endpoints: latency-svc-fxb9r [118.26158ms] Aug 14 10:26:52.658: INFO: Created: latency-svc-d4nkg Aug 14 10:26:52.668: INFO: Got endpoints: latency-svc-d4nkg [190.867444ms] Aug 14 10:26:52.721: INFO: Created: latency-svc-xcznl Aug 14 10:26:52.747: INFO: Got endpoints: latency-svc-xcznl [269.100769ms] Aug 14 10:26:52.837: INFO: Created: latency-svc-fm72z Aug 14 10:26:52.841: INFO: Got endpoints: latency-svc-fm72z [362.982658ms] Aug 14 10:26:53.033: INFO: 
Created: latency-svc-wf5wl Aug 14 10:26:53.075: INFO: Got endpoints: latency-svc-wf5wl [596.633196ms] Aug 14 10:26:53.076: INFO: Created: latency-svc-v599l Aug 14 10:26:53.093: INFO: Got endpoints: latency-svc-v599l [613.345062ms] Aug 14 10:26:53.233: INFO: Created: latency-svc-9wdjh Aug 14 10:26:53.238: INFO: Got endpoints: latency-svc-9wdjh [758.613527ms] Aug 14 10:26:53.298: INFO: Created: latency-svc-pzvsd Aug 14 10:26:53.409: INFO: Got endpoints: latency-svc-pzvsd [930.099235ms] Aug 14 10:26:53.418: INFO: Created: latency-svc-g69nv Aug 14 10:26:53.457: INFO: Got endpoints: latency-svc-g69nv [977.212734ms] Aug 14 10:26:53.496: INFO: Created: latency-svc-k66rl Aug 14 10:26:53.580: INFO: Got endpoints: latency-svc-k66rl [1.100101296s] Aug 14 10:26:53.796: INFO: Created: latency-svc-5p8q6 Aug 14 10:26:53.886: INFO: Got endpoints: latency-svc-5p8q6 [1.406544716s] Aug 14 10:26:53.987: INFO: Created: latency-svc-9lfkd Aug 14 10:26:54.021: INFO: Got endpoints: latency-svc-9lfkd [1.541272867s] Aug 14 10:26:54.210: INFO: Created: latency-svc-mdpmf Aug 14 10:26:54.214: INFO: Got endpoints: latency-svc-mdpmf [1.733551565s] Aug 14 10:26:54.306: INFO: Created: latency-svc-mddjw Aug 14 10:26:55.078: INFO: Got endpoints: latency-svc-mddjw [2.597352935s] Aug 14 10:26:55.633: INFO: Created: latency-svc-thpfw Aug 14 10:26:55.832: INFO: Got endpoints: latency-svc-thpfw [3.351502356s] Aug 14 10:26:55.843: INFO: Created: latency-svc-vnpt7 Aug 14 10:26:55.898: INFO: Got endpoints: latency-svc-vnpt7 [3.417054353s] Aug 14 10:26:56.029: INFO: Created: latency-svc-h5c7w Aug 14 10:26:56.053: INFO: Got endpoints: latency-svc-h5c7w [3.384954064s] Aug 14 10:26:56.174: INFO: Created: latency-svc-8xffq Aug 14 10:26:56.234: INFO: Got endpoints: latency-svc-8xffq [3.486429958s] Aug 14 10:26:56.491: INFO: Created: latency-svc-5ngct Aug 14 10:26:56.495: INFO: Got endpoints: latency-svc-5ngct [3.654316313s] Aug 14 10:26:56.682: INFO: Created: latency-svc-286dw Aug 14 10:26:56.695: INFO: Got 
endpoints: latency-svc-286dw [3.619502079s] Aug 14 10:26:56.762: INFO: Created: latency-svc-v76cl Aug 14 10:26:56.774: INFO: Got endpoints: latency-svc-v76cl [3.681662242s] Aug 14 10:26:56.850: INFO: Created: latency-svc-9wwbp Aug 14 10:26:56.853: INFO: Got endpoints: latency-svc-9wwbp [3.614970339s] Aug 14 10:26:57.011: INFO: Created: latency-svc-rkxx8 Aug 14 10:26:57.017: INFO: Got endpoints: latency-svc-rkxx8 [3.608610319s] Aug 14 10:26:57.062: INFO: Created: latency-svc-5rrxm Aug 14 10:26:57.082: INFO: Got endpoints: latency-svc-5rrxm [3.624720157s] Aug 14 10:26:57.111: INFO: Created: latency-svc-hz45p Aug 14 10:26:57.161: INFO: Got endpoints: latency-svc-hz45p [3.5812576s] Aug 14 10:26:57.230: INFO: Created: latency-svc-rks7f Aug 14 10:26:57.250: INFO: Got endpoints: latency-svc-rks7f [3.364174551s] Aug 14 10:26:57.343: INFO: Created: latency-svc-qsfv4 Aug 14 10:26:57.365: INFO: Got endpoints: latency-svc-qsfv4 [3.343541512s] Aug 14 10:26:57.511: INFO: Created: latency-svc-7f2x4 Aug 14 10:26:57.527: INFO: Got endpoints: latency-svc-7f2x4 [3.313273461s] Aug 14 10:26:57.586: INFO: Created: latency-svc-8vr5j Aug 14 10:26:57.593: INFO: Got endpoints: latency-svc-8vr5j [2.514950983s] Aug 14 10:26:57.689: INFO: Created: latency-svc-mdx2s Aug 14 10:26:57.690: INFO: Got endpoints: latency-svc-mdx2s [1.858202786s] Aug 14 10:26:57.747: INFO: Created: latency-svc-dq2mj Aug 14 10:26:57.779: INFO: Got endpoints: latency-svc-dq2mj [1.881066196s] Aug 14 10:26:57.878: INFO: Created: latency-svc-2jxp4 Aug 14 10:26:57.905: INFO: Got endpoints: latency-svc-2jxp4 [1.851573809s] Aug 14 10:26:58.025: INFO: Created: latency-svc-mmnrp Aug 14 10:26:58.081: INFO: Got endpoints: latency-svc-mmnrp [1.847579596s] Aug 14 10:26:58.222: INFO: Created: latency-svc-2n784 Aug 14 10:26:58.224: INFO: Got endpoints: latency-svc-2n784 [1.728552593s] Aug 14 10:26:58.377: INFO: Created: latency-svc-5gfcg Aug 14 10:26:58.382: INFO: Got endpoints: latency-svc-5gfcg [1.686834838s] Aug 14 10:26:58.553: 
INFO: Created: latency-svc-r6qdg Aug 14 10:26:58.585: INFO: Got endpoints: latency-svc-r6qdg [1.810387975s] Aug 14 10:26:58.588: INFO: Created: latency-svc-hxsvx Aug 14 10:26:58.621: INFO: Got endpoints: latency-svc-hxsvx [1.76871241s] Aug 14 10:26:58.724: INFO: Created: latency-svc-7fbgc Aug 14 10:26:58.759: INFO: Got endpoints: latency-svc-7fbgc [1.741985714s] Aug 14 10:26:58.814: INFO: Created: latency-svc-rmlst Aug 14 10:26:58.928: INFO: Got endpoints: latency-svc-rmlst [1.845899647s] Aug 14 10:26:58.988: INFO: Created: latency-svc-q8bwr Aug 14 10:26:59.018: INFO: Got endpoints: latency-svc-q8bwr [1.856930391s] Aug 14 10:26:59.109: INFO: Created: latency-svc-2l8fs Aug 14 10:26:59.111: INFO: Got endpoints: latency-svc-2l8fs [1.860416315s] Aug 14 10:26:59.139: INFO: Created: latency-svc-knjjh Aug 14 10:26:59.161: INFO: Got endpoints: latency-svc-knjjh [1.795641775s] Aug 14 10:26:59.197: INFO: Created: latency-svc-6nrvp Aug 14 10:26:59.256: INFO: Got endpoints: latency-svc-6nrvp [1.728891067s] Aug 14 10:26:59.336: INFO: Created: latency-svc-lfjp6 Aug 14 10:26:59.394: INFO: Got endpoints: latency-svc-lfjp6 [1.801206269s] Aug 14 10:26:59.456: INFO: Created: latency-svc-t5x5f Aug 14 10:26:59.492: INFO: Got endpoints: latency-svc-t5x5f [1.802100806s] Aug 14 10:26:59.666: INFO: Created: latency-svc-876jc Aug 14 10:26:59.850: INFO: Got endpoints: latency-svc-876jc [2.070809377s] Aug 14 10:27:00.035: INFO: Created: latency-svc-6p6n2 Aug 14 10:27:00.039: INFO: Got endpoints: latency-svc-6p6n2 [2.133365937s] Aug 14 10:27:00.293: INFO: Created: latency-svc-wcdzz Aug 14 10:27:00.296: INFO: Got endpoints: latency-svc-wcdzz [2.214536695s] Aug 14 10:27:00.521: INFO: Created: latency-svc-8k4pv Aug 14 10:27:00.585: INFO: Got endpoints: latency-svc-8k4pv [2.360462897s] Aug 14 10:27:01.150: INFO: Created: latency-svc-wtkj9 Aug 14 10:27:01.671: INFO: Got endpoints: latency-svc-wtkj9 [3.288478462s] Aug 14 10:27:01.738: INFO: Created: latency-svc-5sg9z Aug 14 10:27:02.706: INFO: Got 
endpoints: latency-svc-5sg9z [4.121352557s] Aug 14 10:27:02.787: INFO: Created: latency-svc-b69th Aug 14 10:27:02.959: INFO: Got endpoints: latency-svc-b69th [4.337811923s] Aug 14 10:27:03.034: INFO: Created: latency-svc-rm69z Aug 14 10:27:03.055: INFO: Got endpoints: latency-svc-rm69z [4.295464103s] Aug 14 10:27:03.311: INFO: Created: latency-svc-tkqrk Aug 14 10:27:03.640: INFO: Got endpoints: latency-svc-tkqrk [4.71230183s] Aug 14 10:27:03.650: INFO: Created: latency-svc-glhkr Aug 14 10:27:03.850: INFO: Got endpoints: latency-svc-glhkr [4.831984701s] Aug 14 10:27:03.918: INFO: Created: latency-svc-5fw9z Aug 14 10:27:04.089: INFO: Got endpoints: latency-svc-5fw9z [4.978605885s] Aug 14 10:27:04.269: INFO: Created: latency-svc-kbc2t Aug 14 10:27:04.308: INFO: Got endpoints: latency-svc-kbc2t [5.147875675s] Aug 14 10:27:04.338: INFO: Created: latency-svc-nq6tz Aug 14 10:27:04.357: INFO: Got endpoints: latency-svc-nq6tz [5.101347381s] Aug 14 10:27:04.451: INFO: Created: latency-svc-7qxvm Aug 14 10:27:04.483: INFO: Got endpoints: latency-svc-7qxvm [5.089141436s] Aug 14 10:27:04.670: INFO: Created: latency-svc-lb2z4 Aug 14 10:27:04.671: INFO: Got endpoints: latency-svc-lb2z4 [5.178071694s] Aug 14 10:27:04.765: INFO: Created: latency-svc-5lwpx Aug 14 10:27:04.891: INFO: Got endpoints: latency-svc-5lwpx [5.041710702s] Aug 14 10:27:04.894: INFO: Created: latency-svc-2qhjc Aug 14 10:27:05.203: INFO: Got endpoints: latency-svc-2qhjc [5.164853457s] Aug 14 10:27:05.213: INFO: Created: latency-svc-thmz6 Aug 14 10:27:05.282: INFO: Got endpoints: latency-svc-thmz6 [4.985979735s] Aug 14 10:27:05.421: INFO: Created: latency-svc-nnfc6 Aug 14 10:27:05.424: INFO: Got endpoints: latency-svc-nnfc6 [4.839488771s] Aug 14 10:27:05.826: INFO: Created: latency-svc-27jh8 Aug 14 10:27:06.054: INFO: Got endpoints: latency-svc-27jh8 [4.383443522s] Aug 14 10:27:06.056: INFO: Created: latency-svc-wfhrq Aug 14 10:27:06.086: INFO: Got endpoints: latency-svc-wfhrq [3.379662784s] Aug 14 10:27:06.126: 
INFO: Created: latency-svc-6pz8d Aug 14 10:27:06.694: INFO: Got endpoints: latency-svc-6pz8d [3.734754157s] Aug 14 10:27:07.087: INFO: Created: latency-svc-lcsbl Aug 14 10:27:07.263: INFO: Got endpoints: latency-svc-lcsbl [4.20746258s] Aug 14 10:27:07.270: INFO: Created: latency-svc-4d4c5 Aug 14 10:27:07.301: INFO: Got endpoints: latency-svc-4d4c5 [3.661012965s] Aug 14 10:27:07.687: INFO: Created: latency-svc-vhzf6 Aug 14 10:27:07.710: INFO: Got endpoints: latency-svc-vhzf6 [3.859942484s] Aug 14 10:27:07.928: INFO: Created: latency-svc-zh7l4 Aug 14 10:27:07.946: INFO: Got endpoints: latency-svc-zh7l4 [3.856253403s] Aug 14 10:27:08.108: INFO: Created: latency-svc-jfc7s Aug 14 10:27:08.341: INFO: Got endpoints: latency-svc-jfc7s [4.03222317s] Aug 14 10:27:08.985: INFO: Created: latency-svc-5frbz Aug 14 10:27:09.610: INFO: Got endpoints: latency-svc-5frbz [5.25276569s] Aug 14 10:27:09.619: INFO: Created: latency-svc-rmr5q Aug 14 10:27:09.699: INFO: Got endpoints: latency-svc-rmr5q [5.216199449s] Aug 14 10:27:09.987: INFO: Created: latency-svc-pk625 Aug 14 10:27:10.143: INFO: Got endpoints: latency-svc-pk625 [5.471955306s] Aug 14 10:27:10.443: INFO: Created: latency-svc-bhdx5 Aug 14 10:27:10.491: INFO: Got endpoints: latency-svc-bhdx5 [5.599245909s] Aug 14 10:27:10.635: INFO: Created: latency-svc-xqfv6 Aug 14 10:27:10.648: INFO: Got endpoints: latency-svc-xqfv6 [5.444939129s] Aug 14 10:27:10.707: INFO: Created: latency-svc-kjml7 Aug 14 10:27:10.843: INFO: Got endpoints: latency-svc-kjml7 [5.561593842s] Aug 14 10:27:11.065: INFO: Created: latency-svc-b957x Aug 14 10:27:11.104: INFO: Got endpoints: latency-svc-b957x [5.679313482s] Aug 14 10:27:11.315: INFO: Created: latency-svc-f52dq Aug 14 10:27:11.366: INFO: Got endpoints: latency-svc-f52dq [5.312270618s] Aug 14 10:27:12.144: INFO: Created: latency-svc-9fc4x Aug 14 10:27:12.212: INFO: Got endpoints: latency-svc-9fc4x [6.126089041s] Aug 14 10:27:12.593: INFO: Created: latency-svc-tzccb Aug 14 10:27:12.731: INFO: Got 
endpoints: latency-svc-tzccb [6.036297632s] Aug 14 10:27:13.152: INFO: Created: latency-svc-zqnh4 Aug 14 10:27:13.180: INFO: Got endpoints: latency-svc-zqnh4 [5.917245449s] Aug 14 10:27:13.228: INFO: Created: latency-svc-nt8jr Aug 14 10:27:13.243: INFO: Got endpoints: latency-svc-nt8jr [5.942044s] Aug 14 10:27:13.329: INFO: Created: latency-svc-8xr6p Aug 14 10:27:13.369: INFO: Got endpoints: latency-svc-8xr6p [189.056954ms] Aug 14 10:27:13.540: INFO: Created: latency-svc-szr2f Aug 14 10:27:13.670: INFO: Got endpoints: latency-svc-szr2f [5.959376079s] Aug 14 10:27:13.868: INFO: Created: latency-svc-smvzf Aug 14 10:27:13.915: INFO: Got endpoints: latency-svc-smvzf [5.969316797s] Aug 14 10:27:14.176: INFO: Created: latency-svc-5xgx6 Aug 14 10:27:14.183: INFO: Got endpoints: latency-svc-5xgx6 [5.841729652s] Aug 14 10:27:14.378: INFO: Created: latency-svc-f74hd Aug 14 10:27:14.419: INFO: Got endpoints: latency-svc-f74hd [4.808768112s] Aug 14 10:27:15.050: INFO: Created: latency-svc-km9cb Aug 14 10:27:15.058: INFO: Got endpoints: latency-svc-km9cb [5.358847746s] Aug 14 10:27:15.354: INFO: Created: latency-svc-g5fzq Aug 14 10:27:16.077: INFO: Got endpoints: latency-svc-g5fzq [5.934809739s] Aug 14 10:27:16.336: INFO: Created: latency-svc-l86lw Aug 14 10:27:16.508: INFO: Got endpoints: latency-svc-l86lw [6.017388345s] Aug 14 10:27:16.550: INFO: Created: latency-svc-t4x8f Aug 14 10:27:16.784: INFO: Got endpoints: latency-svc-t4x8f [6.135591247s] Aug 14 10:27:17.348: INFO: Created: latency-svc-wxbld Aug 14 10:27:17.713: INFO: Got endpoints: latency-svc-wxbld [6.869266477s] Aug 14 10:27:18.206: INFO: Created: latency-svc-g6zq8 Aug 14 10:27:18.646: INFO: Got endpoints: latency-svc-g6zq8 [7.542460932s] Aug 14 10:27:19.005: INFO: Created: latency-svc-2gcqm Aug 14 10:27:19.245: INFO: Got endpoints: latency-svc-2gcqm [7.878391928s] Aug 14 10:27:19.845: INFO: Created: latency-svc-5shv6 Aug 14 10:27:19.999: INFO: Got endpoints: latency-svc-5shv6 [7.787014974s] Aug 14 10:27:20.358: 
INFO: Created: latency-svc-rqpmh Aug 14 10:27:20.369: INFO: Got endpoints: latency-svc-rqpmh [7.638659279s] Aug 14 10:27:20.453: INFO: Created: latency-svc-fxg24 Aug 14 10:27:20.664: INFO: Got endpoints: latency-svc-fxg24 [7.420739523s] Aug 14 10:27:20.976: INFO: Created: latency-svc-pgw2d Aug 14 10:27:21.377: INFO: Got endpoints: latency-svc-pgw2d [8.007570858s] Aug 14 10:27:21.683: INFO: Created: latency-svc-q2lhf Aug 14 10:27:21.891: INFO: Got endpoints: latency-svc-q2lhf [8.221798523s] Aug 14 10:27:22.432: INFO: Created: latency-svc-dtp5x Aug 14 10:27:22.750: INFO: Got endpoints: latency-svc-dtp5x [8.834534569s] Aug 14 10:27:22.810: INFO: Created: latency-svc-9qzdp Aug 14 10:27:23.076: INFO: Got endpoints: latency-svc-9qzdp [8.892914065s] Aug 14 10:27:23.533: INFO: Created: latency-svc-b2jf5 Aug 14 10:27:23.539: INFO: Got endpoints: latency-svc-b2jf5 [9.120207235s] Aug 14 10:27:23.621: INFO: Created: latency-svc-v2wpm Aug 14 10:27:23.802: INFO: Got endpoints: latency-svc-v2wpm [8.744198695s] Aug 14 10:27:24.133: INFO: Created: latency-svc-sfdnd Aug 14 10:27:24.347: INFO: Got endpoints: latency-svc-sfdnd [8.269152206s] Aug 14 10:27:24.414: INFO: Created: latency-svc-4ltvp Aug 14 10:27:24.514: INFO: Got endpoints: latency-svc-4ltvp [8.005907978s] Aug 14 10:27:24.581: INFO: Created: latency-svc-zhfbv Aug 14 10:27:24.582: INFO: Got endpoints: latency-svc-zhfbv [7.797608762s] Aug 14 10:27:24.672: INFO: Created: latency-svc-hfzw7 Aug 14 10:27:24.703: INFO: Got endpoints: latency-svc-hfzw7 [6.990137264s] Aug 14 10:27:24.765: INFO: Created: latency-svc-kbk84 Aug 14 10:27:24.910: INFO: Got endpoints: latency-svc-kbk84 [6.263155175s] Aug 14 10:27:25.009: INFO: Created: latency-svc-b5dgj Aug 14 10:27:25.215: INFO: Got endpoints: latency-svc-b5dgj [5.97052862s] Aug 14 10:27:25.273: INFO: Created: latency-svc-4s5b9 Aug 14 10:27:25.285: INFO: Got endpoints: latency-svc-4s5b9 [5.285793119s] Aug 14 10:27:25.358: INFO: Created: latency-svc-chq6j Aug 14 10:27:25.394: INFO: Got 
endpoints: latency-svc-chq6j [5.024959227s] Aug 14 10:27:25.430: INFO: Created: latency-svc-khtmz Aug 14 10:27:25.447: INFO: Got endpoints: latency-svc-khtmz [4.783224788s] Aug 14 10:27:25.510: INFO: Created: latency-svc-qf6jr Aug 14 10:27:25.514: INFO: Got endpoints: latency-svc-qf6jr [4.137036128s] Aug 14 10:27:25.543: INFO: Created: latency-svc-xsks4 Aug 14 10:27:25.562: INFO: Got endpoints: latency-svc-xsks4 [3.670836778s] Aug 14 10:27:25.605: INFO: Created: latency-svc-8mdhm Aug 14 10:27:25.682: INFO: Got endpoints: latency-svc-8mdhm [2.932073924s] Aug 14 10:27:25.684: INFO: Created: latency-svc-l8rsm Aug 14 10:27:25.728: INFO: Got endpoints: latency-svc-l8rsm [2.652024629s] Aug 14 10:27:25.759: INFO: Created: latency-svc-scktk Aug 14 10:27:25.774: INFO: Got endpoints: latency-svc-scktk [2.234333025s] Aug 14 10:27:25.874: INFO: Created: latency-svc-b7x2h Aug 14 10:27:25.885: INFO: Got endpoints: latency-svc-b7x2h [2.081940081s] Aug 14 10:27:26.114: INFO: Created: latency-svc-rnttg Aug 14 10:27:26.171: INFO: Got endpoints: latency-svc-rnttg [1.823848703s] Aug 14 10:27:26.486: INFO: Created: latency-svc-2l4xg Aug 14 10:27:26.915: INFO: Got endpoints: latency-svc-2l4xg [2.401307549s] Aug 14 10:27:27.146: INFO: Created: latency-svc-p8z78 Aug 14 10:27:27.201: INFO: Got endpoints: latency-svc-p8z78 [2.619425772s] Aug 14 10:27:28.230: INFO: Created: latency-svc-jvs47 Aug 14 10:27:28.238: INFO: Got endpoints: latency-svc-jvs47 [3.534506581s] Aug 14 10:27:28.557: INFO: Created: latency-svc-24fm8 Aug 14 10:27:28.796: INFO: Got endpoints: latency-svc-24fm8 [3.88617315s] Aug 14 10:27:28.850: INFO: Created: latency-svc-ptzgt Aug 14 10:27:28.981: INFO: Got endpoints: latency-svc-ptzgt [3.765802928s] Aug 14 10:27:29.032: INFO: Created: latency-svc-44v8h Aug 14 10:27:29.060: INFO: Got endpoints: latency-svc-44v8h [3.774490888s] Aug 14 10:27:29.151: INFO: Created: latency-svc-8tvfw Aug 14 10:27:29.168: INFO: Got endpoints: latency-svc-8tvfw [3.773494185s] Aug 14 10:27:29.321: 
INFO: Created: latency-svc-wg9dt Aug 14 10:27:29.602: INFO: Got endpoints: latency-svc-wg9dt [4.154291606s] Aug 14 10:27:29.958: INFO: Created: latency-svc-trv5l Aug 14 10:27:30.032: INFO: Got endpoints: latency-svc-trv5l [4.517862736s] Aug 14 10:27:30.174: INFO: Created: latency-svc-zh9jw Aug 14 10:27:30.658: INFO: Got endpoints: latency-svc-zh9jw [5.095437045s] Aug 14 10:27:30.750: INFO: Created: latency-svc-6dk6k Aug 14 10:27:30.885: INFO: Got endpoints: latency-svc-6dk6k [5.203446187s] Aug 14 10:27:31.126: INFO: Created: latency-svc-lhw9g Aug 14 10:27:31.129: INFO: Got endpoints: latency-svc-lhw9g [5.401300337s] Aug 14 10:27:31.547: INFO: Created: latency-svc-hth5f Aug 14 10:27:31.585: INFO: Got endpoints: latency-svc-hth5f [5.811600766s] Aug 14 10:27:31.833: INFO: Created: latency-svc-8zgpm Aug 14 10:27:32.376: INFO: Got endpoints: latency-svc-8zgpm [6.49131261s] Aug 14 10:27:32.377: INFO: Created: latency-svc-7wr9d Aug 14 10:27:32.629: INFO: Got endpoints: latency-svc-7wr9d [6.457944513s] Aug 14 10:27:32.862: INFO: Created: latency-svc-vgbd5 Aug 14 10:27:32.898: INFO: Got endpoints: latency-svc-vgbd5 [5.982016135s] Aug 14 10:27:33.429: INFO: Created: latency-svc-k5kvh Aug 14 10:27:33.874: INFO: Got endpoints: latency-svc-k5kvh [6.672898944s] Aug 14 10:27:34.285: INFO: Created: latency-svc-st8pz Aug 14 10:27:34.294: INFO: Got endpoints: latency-svc-st8pz [6.056531477s] Aug 14 10:27:34.627: INFO: Created: latency-svc-7ww4j Aug 14 10:27:34.816: INFO: Got endpoints: latency-svc-7ww4j [6.020629278s] Aug 14 10:27:34.879: INFO: Created: latency-svc-nvxpq Aug 14 10:27:34.995: INFO: Got endpoints: latency-svc-nvxpq [6.013179712s] Aug 14 10:27:34.999: INFO: Created: latency-svc-vbh49 Aug 14 10:27:35.020: INFO: Got endpoints: latency-svc-vbh49 [5.960349957s] Aug 14 10:27:35.173: INFO: Created: latency-svc-xvr98 Aug 14 10:27:35.194: INFO: Got endpoints: latency-svc-xvr98 [6.026137452s] Aug 14 10:27:35.258: INFO: Created: latency-svc-fh2xw Aug 14 10:27:35.273: INFO: Got 
endpoints: latency-svc-fh2xw [5.670964888s] Aug 14 10:27:35.354: INFO: Created: latency-svc-nxkfm Aug 14 10:27:35.355: INFO: Got endpoints: latency-svc-nxkfm [5.323712903s] Aug 14 10:27:35.556: INFO: Created: latency-svc-j5sbt Aug 14 10:27:35.585: INFO: Got endpoints: latency-svc-j5sbt [4.927584927s] Aug 14 10:27:36.109: INFO: Created: latency-svc-f7c68 Aug 14 10:27:36.118: INFO: Got endpoints: latency-svc-f7c68 [5.232867222s] Aug 14 10:27:36.354: INFO: Created: latency-svc-2xz2z Aug 14 10:27:36.434: INFO: Created: latency-svc-pz82g Aug 14 10:27:36.434: INFO: Got endpoints: latency-svc-2xz2z [5.304691439s] Aug 14 10:27:36.754: INFO: Got endpoints: latency-svc-pz82g [5.168653975s] Aug 14 10:27:37.101: INFO: Created: latency-svc-cz4d5 Aug 14 10:27:37.127: INFO: Got endpoints: latency-svc-cz4d5 [4.751013244s] Aug 14 10:27:37.510: INFO: Created: latency-svc-whv2h Aug 14 10:27:37.557: INFO: Got endpoints: latency-svc-whv2h [4.928586438s] Aug 14 10:27:38.043: INFO: Created: latency-svc-m2dpg Aug 14 10:27:38.134: INFO: Got endpoints: latency-svc-m2dpg [5.235922562s] Aug 14 10:27:38.546: INFO: Created: latency-svc-rdlrf Aug 14 10:27:38.571: INFO: Got endpoints: latency-svc-rdlrf [4.696945865s] Aug 14 10:27:38.573: INFO: Created: latency-svc-r7767 Aug 14 10:27:38.602: INFO: Got endpoints: latency-svc-r7767 [4.307849202s] Aug 14 10:27:38.706: INFO: Created: latency-svc-59hdb Aug 14 10:27:38.747: INFO: Got endpoints: latency-svc-59hdb [3.930182732s] Aug 14 10:27:38.959: INFO: Created: latency-svc-g92gs Aug 14 10:27:38.963: INFO: Got endpoints: latency-svc-g92gs [3.968508031s] Aug 14 10:27:39.103: INFO: Created: latency-svc-xzd4p Aug 14 10:27:39.106: INFO: Got endpoints: latency-svc-xzd4p [4.085636406s] Aug 14 10:27:39.137: INFO: Created: latency-svc-ln2pf Aug 14 10:27:39.191: INFO: Got endpoints: latency-svc-ln2pf [3.996401774s] Aug 14 10:27:39.305: INFO: Created: latency-svc-8kqw6 Aug 14 10:27:39.406: INFO: Got endpoints: latency-svc-8kqw6 [4.133511923s] Aug 14 10:27:39.413: 
INFO: Created: latency-svc-nlthq Aug 14 10:27:39.419: INFO: Got endpoints: latency-svc-nlthq [4.063571271s] Aug 14 10:27:39.442: INFO: Created: latency-svc-z799z Aug 14 10:27:39.478: INFO: Got endpoints: latency-svc-z799z [3.892742655s] Aug 14 10:27:39.559: INFO: Created: latency-svc-d8rb8 Aug 14 10:27:39.575: INFO: Got endpoints: latency-svc-d8rb8 [3.456758853s] Aug 14 10:27:39.768: INFO: Created: latency-svc-qldcc Aug 14 10:27:39.771: INFO: Got endpoints: latency-svc-qldcc [3.337175623s] Aug 14 10:27:40.006: INFO: Created: latency-svc-9z6dr Aug 14 10:27:40.026: INFO: Got endpoints: latency-svc-9z6dr [3.272030781s] Aug 14 10:27:40.097: INFO: Created: latency-svc-qvx8c Aug 14 10:27:40.221: INFO: Got endpoints: latency-svc-qvx8c [3.094138992s] Aug 14 10:27:40.242: INFO: Created: latency-svc-8ksjd Aug 14 10:27:40.724: INFO: Got endpoints: latency-svc-8ksjd [3.167029296s] Aug 14 10:27:42.388: INFO: Created: latency-svc-cc6mq Aug 14 10:27:42.575: INFO: Got endpoints: latency-svc-cc6mq [4.441408835s] Aug 14 10:27:42.591: INFO: Created: latency-svc-wxk7h Aug 14 10:27:42.965: INFO: Got endpoints: latency-svc-wxk7h [4.393416633s] Aug 14 10:27:42.970: INFO: Created: latency-svc-q6svm Aug 14 10:27:43.221: INFO: Got endpoints: latency-svc-q6svm [4.619356541s] Aug 14 10:27:43.301: INFO: Created: latency-svc-sf7bx Aug 14 10:27:43.401: INFO: Got endpoints: latency-svc-sf7bx [4.654033304s] Aug 14 10:27:43.427: INFO: Created: latency-svc-rvjnx Aug 14 10:27:43.858: INFO: Got endpoints: latency-svc-rvjnx [4.89505905s] Aug 14 10:27:44.091: INFO: Created: latency-svc-mbrzx Aug 14 10:27:44.103: INFO: Got endpoints: latency-svc-mbrzx [4.997460155s] Aug 14 10:27:44.472: INFO: Created: latency-svc-gmv2t Aug 14 10:27:44.592: INFO: Got endpoints: latency-svc-gmv2t [5.401573197s] Aug 14 10:27:44.822: INFO: Created: latency-svc-xvnzj Aug 14 10:27:44.982: INFO: Got endpoints: latency-svc-xvnzj [5.575708762s] Aug 14 10:27:44.982: INFO: Created: latency-svc-krq2w Aug 14 10:27:44.996: INFO: Got 
endpoints: latency-svc-krq2w [5.577371344s] Aug 14 10:27:45.038: INFO: Created: latency-svc-vp6kd Aug 14 10:27:45.045: INFO: Got endpoints: latency-svc-vp6kd [5.566257265s] Aug 14 10:27:45.144: INFO: Created: latency-svc-dwjkl Aug 14 10:27:45.191: INFO: Created: latency-svc-58jt5 Aug 14 10:27:45.191: INFO: Got endpoints: latency-svc-dwjkl [5.615939548s] Aug 14 10:27:45.305: INFO: Got endpoints: latency-svc-58jt5 [5.533782437s] Aug 14 10:27:45.307: INFO: Created: latency-svc-9cg2s Aug 14 10:27:45.346: INFO: Got endpoints: latency-svc-9cg2s [5.319876556s] Aug 14 10:27:45.618: INFO: Created: latency-svc-fdtdr Aug 14 10:27:45.808: INFO: Got endpoints: latency-svc-fdtdr [5.586774468s] Aug 14 10:27:45.812: INFO: Created: latency-svc-9fkbw Aug 14 10:27:45.820: INFO: Got endpoints: latency-svc-9fkbw [5.095332358s] Aug 14 10:27:45.852: INFO: Created: latency-svc-zz266 Aug 14 10:27:45.894: INFO: Got endpoints: latency-svc-zz266 [3.319003129s] Aug 14 10:27:45.964: INFO: Created: latency-svc-4flcz Aug 14 10:27:45.996: INFO: Got endpoints: latency-svc-4flcz [3.031400747s] Aug 14 10:27:46.056: INFO: Created: latency-svc-6gspm Aug 14 10:27:46.103: INFO: Got endpoints: latency-svc-6gspm [2.881745819s] Aug 14 10:27:46.148: INFO: Created: latency-svc-nvt56 Aug 14 10:27:46.165: INFO: Got endpoints: latency-svc-nvt56 [2.763660375s] Aug 14 10:27:46.246: INFO: Created: latency-svc-9qddv Aug 14 10:27:46.832: INFO: Got endpoints: latency-svc-9qddv [2.973255221s] Aug 14 10:27:46.958: INFO: Created: latency-svc-ww6vn Aug 14 10:27:47.009: INFO: Got endpoints: latency-svc-ww6vn [2.90614557s] Aug 14 10:27:47.052: INFO: Created: latency-svc-v79sw Aug 14 10:27:47.185: INFO: Got endpoints: latency-svc-v79sw [2.593109506s] Aug 14 10:27:47.407: INFO: Created: latency-svc-mwk9c Aug 14 10:27:47.453: INFO: Got endpoints: latency-svc-mwk9c [2.470544018s] Aug 14 10:27:47.701: INFO: Created: latency-svc-vz64q Aug 14 10:27:47.703: INFO: Got endpoints: latency-svc-vz64q [2.70692518s] Aug 14 10:27:47.905: 
INFO: Created: latency-svc-l65zq Aug 14 10:27:47.949: INFO: Got endpoints: latency-svc-l65zq [2.90441708s] Aug 14 10:27:48.120: INFO: Created: latency-svc-8ss6n Aug 14 10:27:48.135: INFO: Got endpoints: latency-svc-8ss6n [2.944158584s] Aug 14 10:27:48.205: INFO: Created: latency-svc-xz2lj Aug 14 10:27:48.294: INFO: Got endpoints: latency-svc-xz2lj [2.988676861s] Aug 14 10:27:48.337: INFO: Created: latency-svc-hx2x7 Aug 14 10:27:48.361: INFO: Got endpoints: latency-svc-hx2x7 [3.014468408s] Aug 14 10:27:48.650: INFO: Created: latency-svc-qbp88 Aug 14 10:27:48.814: INFO: Got endpoints: latency-svc-qbp88 [3.005392392s] Aug 14 10:27:48.815: INFO: Created: latency-svc-8cmz6 Aug 14 10:27:48.826: INFO: Got endpoints: latency-svc-8cmz6 [3.005897169s] Aug 14 10:27:48.871: INFO: Created: latency-svc-cxbgq Aug 14 10:27:48.886: INFO: Got endpoints: latency-svc-cxbgq [2.99226933s] Aug 14 10:27:48.970: INFO: Created: latency-svc-b9zmv Aug 14 10:27:49.003: INFO: Got endpoints: latency-svc-b9zmv [3.00693568s] Aug 14 10:27:49.021: INFO: Created: latency-svc-kw9x7 Aug 14 10:27:49.036: INFO: Got endpoints: latency-svc-kw9x7 [2.932429082s] Aug 14 10:27:49.057: INFO: Created: latency-svc-k4242 Aug 14 10:27:49.150: INFO: Got endpoints: latency-svc-k4242 [2.985070621s] Aug 14 10:27:49.151: INFO: Created: latency-svc-f8g5n Aug 14 10:27:49.162: INFO: Got endpoints: latency-svc-f8g5n [2.3305053s] Aug 14 10:27:49.162: INFO: Latencies: [189.056954ms 190.867444ms 269.100769ms 362.982658ms 596.633196ms 613.345062ms 758.613527ms 930.099235ms 977.212734ms 1.100101296s 1.406544716s 1.541272867s 1.686834838s 1.728552593s 1.728891067s 1.733551565s 1.741985714s 1.76871241s 1.795641775s 1.801206269s 1.802100806s 1.810387975s 1.823848703s 1.845899647s 1.847579596s 1.851573809s 1.856930391s 1.858202786s 1.860416315s 1.881066196s 2.070809377s 2.081940081s 2.133365937s 2.214536695s 2.234333025s 2.3305053s 2.360462897s 2.401307549s 2.470544018s 2.514950983s 2.593109506s 2.597352935s 2.619425772s 
2.652024629s 2.70692518s 2.763660375s 2.881745819s 2.90441708s 2.90614557s 2.932073924s 2.932429082s 2.944158584s 2.973255221s 2.985070621s 2.988676861s 2.99226933s 3.005392392s 3.005897169s 3.00693568s 3.014468408s 3.031400747s 3.094138992s 3.167029296s 3.272030781s 3.288478462s 3.313273461s 3.319003129s 3.337175623s 3.343541512s 3.351502356s 3.364174551s 3.379662784s 3.384954064s 3.417054353s 3.456758853s 3.486429958s 3.534506581s 3.5812576s 3.608610319s 3.614970339s 3.619502079s 3.624720157s 3.654316313s 3.661012965s 3.670836778s 3.681662242s 3.734754157s 3.765802928s 3.773494185s 3.774490888s 3.856253403s 3.859942484s 3.88617315s 3.892742655s 3.930182732s 3.968508031s 3.996401774s 4.03222317s 4.063571271s 4.085636406s 4.121352557s 4.133511923s 4.137036128s 4.154291606s 4.20746258s 4.295464103s 4.307849202s 4.337811923s 4.383443522s 4.393416633s 4.441408835s 4.517862736s 4.619356541s 4.654033304s 4.696945865s 4.71230183s 4.751013244s 4.783224788s 4.808768112s 4.831984701s 4.839488771s 4.89505905s 4.927584927s 4.928586438s 4.978605885s 4.985979735s 4.997460155s 5.024959227s 5.041710702s 5.089141436s 5.095332358s 5.095437045s 5.101347381s 5.147875675s 5.164853457s 5.168653975s 5.178071694s 5.203446187s 5.216199449s 5.232867222s 5.235922562s 5.25276569s 5.285793119s 5.304691439s 5.312270618s 5.319876556s 5.323712903s 5.358847746s 5.401300337s 5.401573197s 5.444939129s 5.471955306s 5.533782437s 5.561593842s 5.566257265s 5.575708762s 5.577371344s 5.586774468s 5.599245909s 5.615939548s 5.670964888s 5.679313482s 5.811600766s 5.841729652s 5.917245449s 5.934809739s 5.942044s 5.959376079s 5.960349957s 5.969316797s 5.97052862s 5.982016135s 6.013179712s 6.017388345s 6.020629278s 6.026137452s 6.036297632s 6.056531477s 6.126089041s 6.135591247s 6.263155175s 6.457944513s 6.49131261s 6.672898944s 6.869266477s 6.990137264s 7.420739523s 7.542460932s 7.638659279s 7.787014974s 7.797608762s 7.878391928s 8.005907978s 8.007570858s 8.221798523s 8.269152206s 8.744198695s 8.834534569s 
8.892914065s 9.120207235s] Aug 14 10:27:49.162: INFO: 50 %ile: 4.121352557s Aug 14 10:27:49.162: INFO: 90 %ile: 6.263155175s Aug 14 10:27:49.162: INFO: 99 %ile: 8.892914065s Aug 14 10:27:49.162: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:27:49.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3592" for this suite. Aug 14 10:29:13.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:29:13.704: INFO: namespace svc-latency-3592 deletion completed in 1m24.535420011s • [SLOW TEST:152.578 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:29:13.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 14 10:29:22.223: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:29:23.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3753" for this suite. Aug 14 10:29:32.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:29:32.379: INFO: namespace container-runtime-3753 deletion completed in 9.294840575s • [SLOW TEST:18.674 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:29:32.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-5bec130c-14ba-412a-845d-f73175eb5db5 STEP: Creating a pod to test consume secrets Aug 14 10:29:33.865: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c" in namespace "projected-2275" to be "success or failure" Aug 14 10:29:34.146: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c": Phase="Pending", Reason="", readiness=false. Elapsed: 280.805916ms Aug 14 10:29:36.380: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.51443354s Aug 14 10:29:38.860: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.994246411s Aug 14 10:29:40.863: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.99769235s Aug 14 10:29:43.170: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.304442522s Aug 14 10:29:45.224: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c": Phase="Running", Reason="", readiness=true. Elapsed: 11.358438789s Aug 14 10:29:47.228: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.362328697s STEP: Saw pod success Aug 14 10:29:47.228: INFO: Pod "pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c" satisfied condition "success or failure" Aug 14 10:29:47.283: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c container projected-secret-volume-test: STEP: delete the pod Aug 14 10:29:47.482: INFO: Waiting for pod pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c to disappear Aug 14 10:29:47.601: INFO: Pod pod-projected-secrets-0354ff7c-f5f2-4743-a8b9-913a1045c66c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:29:47.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2275" for this suite. Aug 14 10:29:55.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:29:57.735: INFO: namespace projected-2275 deletion completed in 10.131479021s • [SLOW TEST:25.356 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:29:57.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Aug 14 10:29:58.398: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 14 10:29:58.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4679' Aug 14 10:29:59.210: INFO: stderr: "" Aug 14 10:29:59.210: INFO: stdout: "service/redis-slave created\n" Aug 14 10:29:59.210: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 14 10:29:59.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4679' Aug 14 10:29:59.685: INFO: stderr: "" Aug 14 10:29:59.685: INFO: stdout: "service/redis-master created\n" Aug 14 10:29:59.685: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 14 10:29:59.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4679' Aug 14 10:30:00.068: INFO: stderr: "" Aug 14 10:30:00.068: INFO: stdout: "service/frontend created\n" Aug 14 10:30:00.068: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 14 10:30:00.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4679' Aug 14 10:30:00.379: INFO: stderr: "" Aug 14 10:30:00.379: INFO: stdout: "deployment.apps/frontend created\n" Aug 14 10:30:00.379: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 14 10:30:00.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4679' Aug 14 10:30:00.908: INFO: stderr: "" Aug 14 10:30:00.908: INFO: stdout: "deployment.apps/redis-master created\n" Aug 14 10:30:00.908: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 14 10:30:00.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4679' Aug 14 10:30:01.306: INFO: stderr: "" Aug 14 10:30:01.306: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Aug 14 10:30:01.306: INFO: Waiting for all frontend pods to be Running. Aug 14 10:30:16.356: INFO: Waiting for frontend to serve content. Aug 14 10:30:16.381: INFO: Trying to add a new entry to the guestbook. Aug 14 10:30:16.394: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 14 10:30:16.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4679' Aug 14 10:30:16.586: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 10:30:16.586: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Aug 14 10:30:16.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4679' Aug 14 10:30:16.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Aug 14 10:30:16.817: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 14 10:30:16.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4679' Aug 14 10:30:17.098: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 10:30:17.098: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 14 10:30:17.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4679' Aug 14 10:30:17.224: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 10:30:17.224: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 14 10:30:17.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4679' Aug 14 10:30:17.337: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 10:30:17.337: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 14 10:30:17.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4679' Aug 14 10:30:17.536: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 14 10:30:17.536: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:30:17.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4679" for this suite. Aug 14 10:31:00.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:31:00.802: INFO: namespace kubectl-4679 deletion completed in 43.143958008s • [SLOW TEST:63.066 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:31:00.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 14 10:31:02.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903" in namespace "downward-api-5536" to be "success or failure" Aug 14 10:31:02.535: INFO: Pod "downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903": Phase="Pending", Reason="", readiness=false. Elapsed: 21.926081ms Aug 14 10:31:04.704: INFO: Pod "downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191114998s Aug 14 10:31:06.884: INFO: Pod "downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371341455s Aug 14 10:31:08.888: INFO: Pod "downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374912601s Aug 14 10:31:11.111: INFO: Pod "downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.598637091s STEP: Saw pod success Aug 14 10:31:11.112: INFO: Pod "downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903" satisfied condition "success or failure" Aug 14 10:31:11.114: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903 container client-container: STEP: delete the pod Aug 14 10:31:11.357: INFO: Waiting for pod downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903 to disappear Aug 14 10:31:11.368: INFO: Pod downwardapi-volume-7b238e82-b329-4204-b128-d46fad507903 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:31:11.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5536" for this suite. 
Aug 14 10:31:17.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:31:17.856: INFO: namespace downward-api-5536 deletion completed in 6.485523109s • [SLOW TEST:17.054 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:31:17.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-fc4d3883-c342-43b0-92b4-503fb2c81471 STEP: Creating a pod to test consume configMaps Aug 14 10:31:18.291: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e" in namespace "projected-9561" to be "success or failure" Aug 14 10:31:18.795: INFO: Pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 503.736344ms Aug 14 10:31:20.836: INFO: Pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.545077755s Aug 14 10:31:22.923: INFO: Pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.631642305s Aug 14 10:31:24.962: INFO: Pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.671015648s Aug 14 10:31:27.244: INFO: Pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e": Phase="Running", Reason="", readiness=true. Elapsed: 8.952995743s Aug 14 10:31:29.249: INFO: Pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.957613172s STEP: Saw pod success Aug 14 10:31:29.249: INFO: Pod "pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e" satisfied condition "success or failure" Aug 14 10:31:29.252: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e container projected-configmap-volume-test: STEP: delete the pod Aug 14 10:31:29.954: INFO: Waiting for pod pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e to disappear Aug 14 10:31:30.118: INFO: Pod pod-projected-configmaps-c1debde8-16c6-412f-bab9-5cd88e7e435e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:31:30.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9561" for this suite. 
Aug 14 10:31:36.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:31:36.894: INFO: namespace projected-9561 deletion completed in 6.771898263s • [SLOW TEST:19.038 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:31:36.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 14 10:31:38.684: INFO: Waiting up to 5m0s for pod "downward-api-e8c609e2-61ec-4687-8e12-bd4015742055" in namespace "downward-api-6485" to be "success or failure" Aug 14 10:31:38.944: INFO: Pod "downward-api-e8c609e2-61ec-4687-8e12-bd4015742055": Phase="Pending", Reason="", readiness=false. Elapsed: 260.751236ms Aug 14 10:31:40.948: INFO: Pod "downward-api-e8c609e2-61ec-4687-8e12-bd4015742055": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.26409469s Aug 14 10:31:43.430: INFO: Pod "downward-api-e8c609e2-61ec-4687-8e12-bd4015742055": Phase="Pending", Reason="", readiness=false. Elapsed: 4.746224752s Aug 14 10:31:45.434: INFO: Pod "downward-api-e8c609e2-61ec-4687-8e12-bd4015742055": Phase="Running", Reason="", readiness=true. Elapsed: 6.750216868s Aug 14 10:31:47.442: INFO: Pod "downward-api-e8c609e2-61ec-4687-8e12-bd4015742055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.758030408s STEP: Saw pod success Aug 14 10:31:47.442: INFO: Pod "downward-api-e8c609e2-61ec-4687-8e12-bd4015742055" satisfied condition "success or failure" Aug 14 10:31:47.459: INFO: Trying to get logs from node iruya-worker pod downward-api-e8c609e2-61ec-4687-8e12-bd4015742055 container dapi-container: STEP: delete the pod Aug 14 10:31:47.677: INFO: Waiting for pod downward-api-e8c609e2-61ec-4687-8e12-bd4015742055 to disappear Aug 14 10:31:47.741: INFO: Pod downward-api-e8c609e2-61ec-4687-8e12-bd4015742055 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:31:47.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6485" for this suite. 
Aug 14 10:31:55.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:31:56.121: INFO: namespace downward-api-6485 deletion completed in 8.377187894s • [SLOW TEST:19.227 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:31:56.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 14 10:32:05.546: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 14 10:32:05.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6197" for this suite. Aug 14 10:32:17.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 14 10:32:17.973: INFO: namespace container-runtime-6197 deletion completed in 12.17025144s • [SLOW TEST:21.851 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 14 10:32:17.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 14 10:32:18.018: INFO: Creating deployment "nginx-deployment" Aug 14 10:32:18.022: INFO: Waiting for observed generation 1 Aug 14 10:32:20.028: INFO: Waiting for all required pods to come up Aug 14 10:32:20.032: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 14 10:32:34.042: INFO: Waiting for deployment "nginx-deployment" to complete Aug 14 10:32:34.048: INFO: Updating deployment "nginx-deployment" with a non-existent image Aug 14 10:32:34.053: INFO: Updating deployment nginx-deployment Aug 14 10:32:34.053: INFO: Waiting for observed generation 2 Aug 14 10:32:36.221: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 14 10:32:36.223: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 14 10:32:36.225: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 14 10:32:36.232: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 14 10:32:36.232: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 14 10:32:36.234: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 14 10:32:36.237: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Aug 14 10:32:36.237: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Aug 14 10:32:36.242: INFO: Updating deployment nginx-deployment Aug 14 10:32:36.242: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Aug 14 10:32:37.057: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 14 10:32:37.628: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 14 10:32:38.632: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3510,SelfLink:/apis/apps/v1/namespaces/deployment-3510/deployments/nginx-deployment,UID:d1c62e25-9229-48ab-a9a9-fcaadb683720,ResourceVersion:4868551,Generation:3,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-14 10:32:35 +0000 UTC 2020-08-14 10:32:18 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-08-14 10:32:37 +0000 UTC 2020-08-14 10:32:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Aug 14 10:32:38.833: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3510,SelfLink:/apis/apps/v1/namespaces/deployment-3510/replicasets/nginx-deployment-55fb7cb77f,UID:31af9ed6-35e9-4193-b7be-5d075232773c,ResourceVersion:4868569,Generation:3,CreationTimestamp:2020-08-14 10:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d1c62e25-9229-48ab-a9a9-fcaadb683720 0xc0024365a7 0xc0024365a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 14 10:32:38.833: INFO: All old ReplicaSets of Deployment "nginx-deployment": Aug 14 10:32:38.834: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3510,SelfLink:/apis/apps/v1/namespaces/deployment-3510/replicasets/nginx-deployment-7b8c6f4498,UID:5f668576-4972-4d2b-b0fd-b49c7fe0e4fd,ResourceVersion:4868554,Generation:3,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d1c62e25-9229-48ab-a9a9-fcaadb683720 0xc002436677 0xc002436678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Aug 14 10:32:38.975: INFO: Pod "nginx-deployment-55fb7cb77f-8nsz8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8nsz8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-8nsz8,UID:33d7d069-50a5-48c1-bd70-7afa2c4e9b51,ResourceVersion:4868496,Generation:0,CreationTimestamp:2020-08-14 10:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437007 0xc002437008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002437080} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024370a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-14 10:32:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.975: INFO: Pod "nginx-deployment-55fb7cb77f-bzzxz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bzzxz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-bzzxz,UID:c9d1fd42-ce1c-4281-b29b-2d67f0260bcd,ResourceVersion:4868469,Generation:0,CreationTimestamp:2020-08-14 10:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437170 0xc002437171}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024371f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-14 10:32:34 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.975: INFO: Pod "nginx-deployment-55fb7cb77f-fq4vt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fq4vt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-fq4vt,UID:10124c09-f789-4353-a42e-269315e092d2,ResourceVersion:4868481,Generation:0,CreationTimestamp:2020-08-14 10:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc0024372e0 0xc0024372e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002437360} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-14 10:32:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.976: INFO: Pod "nginx-deployment-55fb7cb77f-hzxnq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hzxnq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-hzxnq,UID:0fb90212-efab-4950-a281-c446840ff1c8,ResourceVersion:4868559,Generation:0,CreationTimestamp:2020-08-14 10:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437450 0xc002437451}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0024374d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024374f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.976: INFO: Pod "nginx-deployment-55fb7cb77f-kknhc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kknhc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-kknhc,UID:05f9fe44-c452-478c-b509-fbd2ef1a727e,ResourceVersion:4868564,Generation:0,CreationTimestamp:2020-08-14 10:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437577 0xc002437578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024375f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.976: INFO: Pod "nginx-deployment-55fb7cb77f-l6r6m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l6r6m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-l6r6m,UID:52f1463a-1e58-4cdd-a3ef-17dc786e0d94,ResourceVersion:4868544,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437697 0xc002437698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002437710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.977: INFO: Pod "nginx-deployment-55fb7cb77f-lrb5f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lrb5f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-lrb5f,UID:0dda06a3-ff4f-48f8-8eb4-8fdc9fd60db3,ResourceVersion:4868523,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc0024377b7 0xc0024377b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002437830} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.977: INFO: Pod "nginx-deployment-55fb7cb77f-m2xhk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m2xhk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-m2xhk,UID:25de722e-9e69-4af3-81bb-87d8c9da92f5,ResourceVersion:4868558,Generation:0,CreationTimestamp:2020-08-14 10:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc0024378d7 0xc0024378d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002437950} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.977: INFO: Pod "nginx-deployment-55fb7cb77f-r2xpl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r2xpl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-r2xpl,UID:8503fe00-e0f8-44a2-93ba-40096a9b9360,ResourceVersion:4868561,Generation:0,CreationTimestamp:2020-08-14 10:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc0024379f7 0xc0024379f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002437a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.978: INFO: Pod "nginx-deployment-55fb7cb77f-t55q7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t55q7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-t55q7,UID:8481d6c9-bf48-4da8-be18-26ad2a792fe3,ResourceVersion:4868473,Generation:0,CreationTimestamp:2020-08-14 10:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437b17 0xc002437b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002437b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-14 10:32:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.978: INFO: Pod "nginx-deployment-55fb7cb77f-x4gpz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x4gpz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-x4gpz,UID:d572cdb7-13d0-4040-a1a1-10bafa26872b,ResourceVersion:4868543,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437c80 0xc002437c81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002437d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.978: INFO: Pod "nginx-deployment-55fb7cb77f-xrs6n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xrs6n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-xrs6n,UID:66f8da43-cfd7-4337-adb4-07ec7e63c04c,ResourceVersion:4868560,Generation:0,CreationTimestamp:2020-08-14 10:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437da7 0xc002437da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002437e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.978: INFO: Pod "nginx-deployment-55fb7cb77f-zcg5z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zcg5z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-55fb7cb77f-zcg5z,UID:78639523-0df0-4b78-8cac-55122a70ae07,ResourceVersion:4868495,Generation:0,CreationTimestamp:2020-08-14 10:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 31af9ed6-35e9-4193-b7be-5d075232773c 0xc002437ec7 0xc002437ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002437f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002437f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-14 10:32:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.979: INFO: Pod "nginx-deployment-7b8c6f4498-2rs78" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2rs78,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-2rs78,UID:d734d9a7-7d17-4833-9a80-7af8e9c85223,ResourceVersion:4868525,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc000ae01a0 0xc000ae01a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ae0360} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ae03a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.979: INFO: Pod "nginx-deployment-7b8c6f4498-47wl8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-47wl8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-47wl8,UID:cce28b37-9c8d-49fa-ad65-84865f2c6da7,ResourceVersion:4868412,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc000ae0de7 0xc000ae0de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ae1420} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ae1440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.12,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-08-14 10:32:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2204f37044ab16ea893cd7eed63b2de0b29c7adb4617d7c9fce677fc371f0a89}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.979: INFO: Pod "nginx-deployment-7b8c6f4498-67ftd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-67ftd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-67ftd,UID:3c98bfdd-0128-45a5-a908-243965466688,ResourceVersion:4868552,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc000ae1e37 0xc000ae1e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e050} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-14 10:32:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.980: INFO: Pod "nginx-deployment-7b8c6f4498-9t22t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9t22t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-9t22t,UID:d7697fc6-4ed3-4bb8-88cb-87f04b1461bd,ResourceVersion:4868524,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7e137 0xc001d7e138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e1b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.980: INFO: Pod "nginx-deployment-7b8c6f4498-blqch" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-blqch,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-blqch,UID:d2f2a0c5-46ab-4662-88eb-d7803f870ff7,ResourceVersion:4868547,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7e267 0xc001d7e268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.980: INFO: Pod "nginx-deployment-7b8c6f4498-crs5g" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-crs5g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-crs5g,UID:11bfb494-3ab9-4e30-a9a7-9346f96da4be,ResourceVersion:4868404,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7e387 0xc001d7e388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e400} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.11,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-14 10:32:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4da0ce2601087e3d2b6c2c0d0c6f2ac4e972841e881a95aff94452931630daaa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.980: INFO: Pod "nginx-deployment-7b8c6f4498-cw8bm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cw8bm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-cw8bm,UID:8b8e42bd-c0e0-41d2-87e4-8000df907805,ResourceVersion:4868421,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7e4f7 0xc001d7e4f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e570} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.13,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-14 10:32:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a28248d89323b482f6d905541b1fe86cf689bb163e3cb954792ebb03cb5f8bb8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.981: INFO: Pod "nginx-deployment-7b8c6f4498-d8pfr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d8pfr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-d8pfr,UID:37cc6473-99cb-46c3-9d23-861fd066296b,ResourceVersion:4868546,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7e667 0xc001d7e668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.981: INFO: Pod "nginx-deployment-7b8c6f4498-dkzwp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dkzwp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-dkzwp,UID:dcd99088-047d-4a09-8148-c4d52790af9c,ResourceVersion:4868541,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7e787 0xc001d7e788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.981: INFO: Pod "nginx-deployment-7b8c6f4498-kk8lk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kk8lk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-kk8lk,UID:baff0b4d-c3b0-48c3-a6c4-bfb4eddc420c,ResourceVersion:4868383,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7e8a7 0xc001d7e8a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7e920} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7e940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.10,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-14 10:32:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c25b2d28243b19d98cf1f11e7a1a04a34aa795ce69388e124aec0cb262d3e1a8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.981: INFO: Pod "nginx-deployment-7b8c6f4498-lwvfz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lwvfz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-lwvfz,UID:84deb407-f8fd-422d-a1ee-e4cb71ecf775,ResourceVersion:4868577,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7ea17 0xc001d7ea18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7ea90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7eab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-14 10:32:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.981: INFO: Pod "nginx-deployment-7b8c6f4498-mqqsn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mqqsn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-mqqsn,UID:d184d323-1e1e-4005-ab64-60d2fcff05cd,ResourceVersion:4868406,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7eb77 0xc001d7eb78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7ebf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7ec10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.195,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-14 10:32:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b892d3f81f7de49d08ac5465846d865866a6477bd3603038e240224740e7a6f6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.982: INFO: Pod "nginx-deployment-7b8c6f4498-nmtvz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nmtvz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-nmtvz,UID:06a322d8-ba17-4862-abf7-0d8a84ee28c2,ResourceVersion:4868432,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7ece7 0xc001d7ece8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7ed60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7ed80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.199,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-14 10:32:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a6465662bdd533df120b6a71e2387811df16b684ad2d4398c8b9757c27108b13}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.982: INFO: Pod "nginx-deployment-7b8c6f4498-pmdl5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pmdl5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-pmdl5,UID:3ac56656-a00c-4aaa-9456-ca4a3122ce03,ResourceVersion:4868376,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7ee57 0xc001d7ee58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7eed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7eef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.9,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-14 10:32:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://66e0fb1d75af67a77731f946bf151a23899977d710b3d90df3eb3a568e5e4eeb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.982: INFO: Pod "nginx-deployment-7b8c6f4498-q5zd4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q5zd4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-q5zd4,UID:0799bbf2-8e29-472f-9261-576268a9c70e,ResourceVersion:4868548,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7efc7 0xc001d7efc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7f040} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7f060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.982: INFO: Pod "nginx-deployment-7b8c6f4498-r74xs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r74xs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-r74xs,UID:983c7759-ebb6-4c41-a333-2f37debaab1c,ResourceVersion:4868568,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7f0e7 0xc001d7f0e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7f160} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7f180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC 
}],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-14 10:32:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.983: INFO: Pod "nginx-deployment-7b8c6f4498-th25c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-th25c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-th25c,UID:b5fce3e5-fef6-4882-a59d-2d2b90746cdd,ResourceVersion:4868530,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7f247 0xc001d7f248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7f2c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7f2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.983: INFO: Pod "nginx-deployment-7b8c6f4498-thzb6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-thzb6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-thzb6,UID:155bfc5d-4b5b-481e-93b0-0939b9c41012,ResourceVersion:4868429,Generation:0,CreationTimestamp:2020-08-14 10:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7f367 0xc001d7f368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7f3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7f400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.197,StartTime:2020-08-14 10:32:18 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-08-14 10:32:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c3da3a6ac13fb3b728429784982529b6ca47b08a6a080210c0735b63499f3bd2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.983: INFO: Pod "nginx-deployment-7b8c6f4498-wcmxs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wcmxs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-wcmxs,UID:881fab21-ced1-4b4c-bf47-7f6b8163f1bd,ResourceVersion:4868549,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7f4d7 0xc001d7f4d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7f550} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7f570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 14 10:32:38.983: INFO: Pod "nginx-deployment-7b8c6f4498-wh8fj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wh8fj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3510,SelfLink:/api/v1/namespaces/deployment-3510/pods/nginx-deployment-7b8c6f4498-wh8fj,UID:6877b0d0-270f-4ff3-9894-1eb71399b1aa,ResourceVersion:4868529,Generation:0,CreationTimestamp:2020-08-14 10:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f668576-4972-4d2b-b0fd-b49c7fe0e4fd 0xc001d7f5f7 0xc001d7f5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwlxc 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwlxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwlxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7f670} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7f690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:32:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:32:38.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3510" for this suite.
Aug 14 10:33:16.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:33:16.342: INFO: namespace deployment-3510 deletion completed in 37.232235476s
• [SLOW TEST:58.369 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:33:16.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4667/configmap-test-b0c534ad-5994-4a87-a1c6-ed0626eae3a5
STEP: Creating a pod to test consume configMaps
Aug 14 10:33:16.838: INFO: Waiting up to 5m0s for pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f" in namespace "configmap-4667" to be "success or failure"
Aug 14 10:33:16.908: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Pending", Reason="", readiness=false. Elapsed: 69.836803ms
Aug 14 10:33:19.019: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18070078s
Aug 14 10:33:21.023: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185067765s
Aug 14 10:33:23.371: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533087152s
Aug 14 10:33:25.557: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Running", Reason="", readiness=true. Elapsed: 8.71896835s
Aug 14 10:33:27.605: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Running", Reason="", readiness=true. Elapsed: 10.766982131s
Aug 14 10:33:29.609: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Running", Reason="", readiness=true. Elapsed: 12.770946957s
Aug 14 10:33:31.613: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Running", Reason="", readiness=true. Elapsed: 14.77541995s
Aug 14 10:33:33.618: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.779881968s
STEP: Saw pod success
Aug 14 10:33:33.618: INFO: Pod "pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f" satisfied condition "success or failure"
Aug 14 10:33:33.620: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f container env-test: 
STEP: delete the pod
Aug 14 10:33:34.122: INFO: Waiting for pod pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f to disappear
Aug 14 10:33:34.524: INFO: Pod pod-configmaps-21a21428-e3dc-4fc9-93f7-347fa76b046f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:33:34.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4667" for this suite.
Aug 14 10:33:43.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:33:45.307: INFO: namespace configmap-4667 deletion completed in 10.779418455s
• [SLOW TEST:28.965 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:33:45.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 10:33:46.619: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

[identical listing repeated for the remaining proxied requests; the per-request "(n) /api/v1/nodes/iruya-worker/proxy/logs/:" headers were lost in truncation]
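The listing above is served through the node's `proxy/logs` subresource on the API server rather than by connecting to the node directly. A small sketch of how that request path is formed (the node name is taken from the log; actually issuing the request needs cluster access, e.g. via `kubectl get --raw`):

```shell
# Assemble the node proxy/logs subresource path the test requests.
node="iruya-worker"                          # node name from the log above
path="/api/v1/nodes/${node}/proxy/logs/"
cmd="kubectl get --raw ${path}"
echo "$cmd"
```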
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6875
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6875
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6875
Aug 14 10:33:56.276: INFO: Found 0 stateful pods, waiting for 1
Aug 14 10:34:06.279: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 14 10:34:06.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 10:34:06.864: INFO: stderr: "I0814 10:34:06.553194     451 log.go:172] (0xc0001306e0) (0xc00077a640) Create stream\nI0814 10:34:06.553260     451 log.go:172] (0xc0001306e0) (0xc00077a640) Stream added, broadcasting: 1\nI0814 10:34:06.555908     451 log.go:172] (0xc0001306e0) Reply frame received for 1\nI0814 10:34:06.555951     451 log.go:172] (0xc0001306e0) (0xc00097a000) Create stream\nI0814 10:34:06.555962     451 log.go:172] (0xc0001306e0) (0xc00097a000) Stream added, broadcasting: 3\nI0814 10:34:06.556953     451 log.go:172] (0xc0001306e0) Reply frame received for 3\nI0814 10:34:06.557010     451 log.go:172] (0xc0001306e0) (0xc00077a6e0) Create stream\nI0814 10:34:06.557035     451 log.go:172] (0xc0001306e0) (0xc00077a6e0) Stream added, broadcasting: 5\nI0814 10:34:06.558048     451 log.go:172] (0xc0001306e0) Reply frame received for 5\nI0814 10:34:06.725455     451 log.go:172] (0xc0001306e0) Data frame received for 5\nI0814 10:34:06.725485     451 log.go:172] (0xc00077a6e0) (5) Data frame handling\nI0814 10:34:06.725504     451 log.go:172] (0xc00077a6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 10:34:06.851947     451 log.go:172] (0xc0001306e0) Data frame received for 3\nI0814 10:34:06.852005     451 log.go:172] (0xc00097a000) (3) Data frame handling\nI0814 10:34:06.852024     451 log.go:172] (0xc00097a000) (3) Data frame sent\nI0814 10:34:06.852036     451 log.go:172] (0xc0001306e0) Data frame received for 3\nI0814 10:34:06.852051     451 log.go:172] (0xc00097a000) (3) Data frame handling\nI0814 10:34:06.852105     451 log.go:172] (0xc0001306e0) Data frame received for 5\nI0814 10:34:06.852142     451 log.go:172] (0xc00077a6e0) (5) Data frame handling\nI0814 10:34:06.857040     451 log.go:172] (0xc0001306e0) Data frame received for 1\nI0814 10:34:06.857081     451 log.go:172] (0xc00077a640) (1) Data frame handling\nI0814 10:34:06.857115     451 log.go:172] (0xc00077a640) (1) Data frame sent\nI0814 10:34:06.857143     
451 log.go:172] (0xc0001306e0) (0xc00077a640) Stream removed, broadcasting: 1\nI0814 10:34:06.857173     451 log.go:172] (0xc0001306e0) Go away received\nI0814 10:34:06.857563     451 log.go:172] (0xc0001306e0) (0xc00077a640) Stream removed, broadcasting: 1\nI0814 10:34:06.857585     451 log.go:172] (0xc0001306e0) (0xc00097a000) Stream removed, broadcasting: 3\nI0814 10:34:06.857594     451 log.go:172] (0xc0001306e0) (0xc00077a6e0) Stream removed, broadcasting: 5\n"
Aug 14 10:34:06.865: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 10:34:06.865: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
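The `mv ... || true` above is how the test makes pod ss-0 unhealthy: moving index.html away makes nginx's readiness probe fail, while `|| true` keeps the exec's exit status 0 even when the file has already been moved. A local sketch of that behavior (the temp directories are stand-ins for the pod's /usr/share/nginx/html and /tmp; the real test runs the same command through `kubectl exec`):

```shell
# Stand-ins for the pod's webroot and /tmp.
www=$(mktemp -d)
tmp=$(mktemp -d)
echo ok > "$www/index.html"

# First run: the file exists, so it moves and the readiness probe's GET
# against index.html would start failing.
mv -v "$www/index.html" "$tmp/" || true

# Second run: the file is already gone; `mv` fails (stderr suppressed here),
# but `|| true` forces the overall exit status to 0, so the exec wrapper
# never reports an error.
mv -v "$www/index.html" "$tmp/" 2>/dev/null || true
final_status=$?
echo "final status: $final_status"
```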

Aug 14 10:34:06.868: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 14 10:34:17.151: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 10:34:17.151: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 10:34:17.780: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 10:34:17.780: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:17.780: INFO: ss-1                Pending         []
Aug 14 10:34:17.780: INFO: 
Aug 14 10:34:17.780: INFO: StatefulSet ss has not reached scale 3, at 2
Aug 14 10:34:18.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.835836568s
Aug 14 10:34:20.229: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.806123448s
Aug 14 10:34:21.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.386628443s
Aug 14 10:34:22.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.129440698s
Aug 14 10:34:24.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.634609289s
Aug 14 10:34:25.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.584498358s
Aug 14 10:34:26.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.573970297s
Aug 14 10:34:27.618: INFO: Verifying statefulset ss doesn't scale past 3 for another 81.058364ms
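The countdown above is the framework polling with a fixed deadline: re-check a condition on an interval until it holds or the budget runs out. A minimal shell sketch of that wait-with-timeout pattern (`condition` and `flag` are illustrative names, not the framework's actual helpers):

```shell
# A path that does not exist yet; the condition is "the flag file exists".
flag=$(mktemp -u)
condition() { [ -e "$flag" ]; }

deadline=$(( $(date +%s) + 10 ))   # 10s budget instead of the test's waits
( sleep 1; touch "$flag" ) &       # condition becomes true after ~1s

result=timeout
while [ "$(date +%s)" -lt "$deadline" ]; do
  if condition; then result=ok; break; fi
  sleep 1                          # poll interval
done
echo "wait result: $result"
```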
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6875
Aug 14 10:34:28.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:34:28.930: INFO: stderr: "I0814 10:34:28.846059     472 log.go:172] (0xc000a6e420) (0xc000322820) Create stream\nI0814 10:34:28.846139     472 log.go:172] (0xc000a6e420) (0xc000322820) Stream added, broadcasting: 1\nI0814 10:34:28.850380     472 log.go:172] (0xc000a6e420) Reply frame received for 1\nI0814 10:34:28.850449     472 log.go:172] (0xc000a6e420) (0xc0006c2320) Create stream\nI0814 10:34:28.850475     472 log.go:172] (0xc000a6e420) (0xc0006c2320) Stream added, broadcasting: 3\nI0814 10:34:28.851414     472 log.go:172] (0xc000a6e420) Reply frame received for 3\nI0814 10:34:28.851452     472 log.go:172] (0xc000a6e420) (0xc000322000) Create stream\nI0814 10:34:28.851464     472 log.go:172] (0xc000a6e420) (0xc000322000) Stream added, broadcasting: 5\nI0814 10:34:28.852296     472 log.go:172] (0xc000a6e420) Reply frame received for 5\nI0814 10:34:28.923883     472 log.go:172] (0xc000a6e420) Data frame received for 3\nI0814 10:34:28.923942     472 log.go:172] (0xc0006c2320) (3) Data frame handling\nI0814 10:34:28.923964     472 log.go:172] (0xc0006c2320) (3) Data frame sent\nI0814 10:34:28.923980     472 log.go:172] (0xc000a6e420) Data frame received for 3\nI0814 10:34:28.924004     472 log.go:172] (0xc0006c2320) (3) Data frame handling\nI0814 10:34:28.924050     472 log.go:172] (0xc000a6e420) Data frame received for 5\nI0814 10:34:28.924075     472 log.go:172] (0xc000322000) (5) Data frame handling\nI0814 10:34:28.924100     472 log.go:172] (0xc000322000) (5) Data frame sent\nI0814 10:34:28.924113     472 log.go:172] (0xc000a6e420) Data frame received for 5\nI0814 10:34:28.924123     472 log.go:172] (0xc000322000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0814 10:34:28.925563     472 log.go:172] (0xc000a6e420) Data frame received for 1\nI0814 10:34:28.925624     472 log.go:172] (0xc000322820) (1) Data frame handling\nI0814 10:34:28.925647     472 log.go:172] (0xc000322820) (1) Data frame sent\nI0814 10:34:28.925658     
472 log.go:172] (0xc000a6e420) (0xc000322820) Stream removed, broadcasting: 1\nI0814 10:34:28.925673     472 log.go:172] (0xc000a6e420) Go away received\nI0814 10:34:28.926084     472 log.go:172] (0xc000a6e420) (0xc000322820) Stream removed, broadcasting: 1\nI0814 10:34:28.926116     472 log.go:172] (0xc000a6e420) (0xc0006c2320) Stream removed, broadcasting: 3\nI0814 10:34:28.926129     472 log.go:172] (0xc000a6e420) (0xc000322000) Stream removed, broadcasting: 5\n"
Aug 14 10:34:28.931: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 10:34:28.931: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 10:34:28.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:34:29.179: INFO: stderr: "I0814 10:34:29.053141     493 log.go:172] (0xc0009c6420) (0xc0003da820) Create stream\nI0814 10:34:29.053305     493 log.go:172] (0xc0009c6420) (0xc0003da820) Stream added, broadcasting: 1\nI0814 10:34:29.058593     493 log.go:172] (0xc0009c6420) Reply frame received for 1\nI0814 10:34:29.058641     493 log.go:172] (0xc0009c6420) (0xc00064a1e0) Create stream\nI0814 10:34:29.058656     493 log.go:172] (0xc0009c6420) (0xc00064a1e0) Stream added, broadcasting: 3\nI0814 10:34:29.059474     493 log.go:172] (0xc0009c6420) Reply frame received for 3\nI0814 10:34:29.059499     493 log.go:172] (0xc0009c6420) (0xc0003da000) Create stream\nI0814 10:34:29.059508     493 log.go:172] (0xc0009c6420) (0xc0003da000) Stream added, broadcasting: 5\nI0814 10:34:29.060266     493 log.go:172] (0xc0009c6420) Reply frame received for 5\nI0814 10:34:29.152025     493 log.go:172] (0xc0009c6420) Data frame received for 5\nI0814 10:34:29.152055     493 log.go:172] (0xc0003da000) (5) Data frame handling\nI0814 10:34:29.152078     493 log.go:172] (0xc0003da000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0814 10:34:29.170899     493 log.go:172] (0xc0009c6420) Data frame received for 5\nI0814 10:34:29.170937     493 log.go:172] (0xc0003da000) (5) Data frame handling\nI0814 10:34:29.170949     493 log.go:172] (0xc0003da000) (5) Data frame sent\nI0814 10:34:29.170962     493 log.go:172] (0xc0009c6420) Data frame received for 5\nmv: can't rename '/tmp/index.html': No such file or directory\nI0814 10:34:29.170991     493 log.go:172] (0xc0003da000) (5) Data frame handling\nI0814 10:34:29.171029     493 log.go:172] (0xc0003da000) (5) Data frame sent\n+ true\nI0814 10:34:29.171074     493 log.go:172] (0xc0009c6420) Data frame received for 3\nI0814 10:34:29.171097     493 log.go:172] (0xc00064a1e0) (3) Data frame handling\nI0814 10:34:29.171123     493 log.go:172] (0xc00064a1e0) (3) Data frame sent\nI0814 10:34:29.171146     493 log.go:172] 
(0xc0009c6420) Data frame received for 3\nI0814 10:34:29.171158     493 log.go:172] (0xc00064a1e0) (3) Data frame handling\nI0814 10:34:29.171222     493 log.go:172] (0xc0009c6420) Data frame received for 5\nI0814 10:34:29.171241     493 log.go:172] (0xc0003da000) (5) Data frame handling\nI0814 10:34:29.173473     493 log.go:172] (0xc0009c6420) Data frame received for 1\nI0814 10:34:29.173499     493 log.go:172] (0xc0003da820) (1) Data frame handling\nI0814 10:34:29.173537     493 log.go:172] (0xc0003da820) (1) Data frame sent\nI0814 10:34:29.173568     493 log.go:172] (0xc0009c6420) (0xc0003da820) Stream removed, broadcasting: 1\nI0814 10:34:29.173589     493 log.go:172] (0xc0009c6420) Go away received\nI0814 10:34:29.173895     493 log.go:172] (0xc0009c6420) (0xc0003da820) Stream removed, broadcasting: 1\nI0814 10:34:29.173915     493 log.go:172] (0xc0009c6420) (0xc00064a1e0) Stream removed, broadcasting: 3\nI0814 10:34:29.173921     493 log.go:172] (0xc0009c6420) (0xc0003da000) Stream removed, broadcasting: 5\n"
Aug 14 10:34:29.180: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 10:34:29.180: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 10:34:29.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:34:29.377: INFO: stderr: "I0814 10:34:29.295253     514 log.go:172] (0xc000708c60) (0xc000690aa0) Create stream\nI0814 10:34:29.295301     514 log.go:172] (0xc000708c60) (0xc000690aa0) Stream added, broadcasting: 1\nI0814 10:34:29.302043     514 log.go:172] (0xc000708c60) Reply frame received for 1\nI0814 10:34:29.302286     514 log.go:172] (0xc000708c60) (0xc0006901e0) Create stream\nI0814 10:34:29.302306     514 log.go:172] (0xc000708c60) (0xc0006901e0) Stream added, broadcasting: 3\nI0814 10:34:29.303237     514 log.go:172] (0xc000708c60) Reply frame received for 3\nI0814 10:34:29.303269     514 log.go:172] (0xc000708c60) (0xc0001b0000) Create stream\nI0814 10:34:29.303282     514 log.go:172] (0xc000708c60) (0xc0001b0000) Stream added, broadcasting: 5\nI0814 10:34:29.304971     514 log.go:172] (0xc000708c60) Reply frame received for 5\nI0814 10:34:29.367128     514 log.go:172] (0xc000708c60) Data frame received for 5\nI0814 10:34:29.367189     514 log.go:172] (0xc0001b0000) (5) Data frame handling\nI0814 10:34:29.367211     514 log.go:172] (0xc0001b0000) (5) Data frame sent\nI0814 10:34:29.367231     514 log.go:172] (0xc000708c60) Data frame received for 5\nI0814 10:34:29.367239     514 log.go:172] (0xc0001b0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0814 10:34:29.367284     514 log.go:172] (0xc000708c60) Data frame received for 3\nI0814 10:34:29.367302     514 log.go:172] (0xc0006901e0) (3) Data frame handling\nI0814 10:34:29.367318     514 log.go:172] (0xc0006901e0) (3) Data frame sent\nI0814 10:34:29.367328     514 log.go:172] (0xc000708c60) Data frame received for 3\nI0814 10:34:29.367336     514 log.go:172] (0xc0006901e0) (3) Data frame handling\nI0814 10:34:29.368673     514 log.go:172] (0xc000708c60) Data frame received for 1\nI0814 10:34:29.368703     514 log.go:172] (0xc000690aa0) (1) Data frame handling\nI0814 10:34:29.368826     514 
log.go:172] (0xc000690aa0) (1) Data frame sent\nI0814 10:34:29.368858     514 log.go:172] (0xc000708c60) (0xc000690aa0) Stream removed, broadcasting: 1\nI0814 10:34:29.369155     514 log.go:172] (0xc000708c60) Go away received\nI0814 10:34:29.369266     514 log.go:172] (0xc000708c60) (0xc000690aa0) Stream removed, broadcasting: 1\nI0814 10:34:29.369295     514 log.go:172] (0xc000708c60) (0xc0006901e0) Stream removed, broadcasting: 3\nI0814 10:34:29.369310     514 log.go:172] (0xc000708c60) (0xc0001b0000) Stream removed, broadcasting: 5\n"
Aug 14 10:34:29.377: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 10:34:29.377: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 10:34:29.381: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Aug 14 10:34:39.475: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 10:34:39.475: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 10:34:39.475: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 14 10:34:39.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 10:34:39.700: INFO: stderr: "I0814 10:34:39.610205     534 log.go:172] (0xc0009e0630) (0xc000272aa0) Create stream\nI0814 10:34:39.610258     534 log.go:172] (0xc0009e0630) (0xc000272aa0) Stream added, broadcasting: 1\nI0814 10:34:39.619387     534 log.go:172] (0xc0009e0630) Reply frame received for 1\nI0814 10:34:39.619689     534 log.go:172] (0xc0009e0630) (0xc000a16000) Create stream\nI0814 10:34:39.619738     534 log.go:172] (0xc0009e0630) (0xc000a16000) Stream added, broadcasting: 3\nI0814 10:34:39.621767     534 log.go:172] (0xc0009e0630) Reply frame received for 3\nI0814 10:34:39.621810     534 log.go:172] (0xc0009e0630) (0xc000a160a0) Create stream\nI0814 10:34:39.621823     534 log.go:172] (0xc0009e0630) (0xc000a160a0) Stream added, broadcasting: 5\nI0814 10:34:39.622702     534 log.go:172] (0xc0009e0630) Reply frame received for 5\nI0814 10:34:39.690744     534 log.go:172] (0xc0009e0630) Data frame received for 3\nI0814 10:34:39.690790     534 log.go:172] (0xc000a16000) (3) Data frame handling\nI0814 10:34:39.690806     534 log.go:172] (0xc000a16000) (3) Data frame sent\nI0814 10:34:39.690816     534 log.go:172] (0xc0009e0630) Data frame received for 3\nI0814 10:34:39.690825     534 log.go:172] (0xc000a16000) (3) Data frame handling\nI0814 10:34:39.690877     534 log.go:172] (0xc0009e0630) Data frame received for 5\nI0814 10:34:39.690910     534 log.go:172] (0xc000a160a0) (5) Data frame handling\nI0814 10:34:39.690931     534 log.go:172] (0xc000a160a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 10:34:39.690950     534 log.go:172] (0xc0009e0630) Data frame received for 5\nI0814 10:34:39.690980     534 log.go:172] (0xc000a160a0) (5) Data frame handling\nI0814 10:34:39.691773     534 log.go:172] (0xc0009e0630) Data frame received for 1\nI0814 10:34:39.691842     534 log.go:172] (0xc000272aa0) (1) Data frame handling\nI0814 10:34:39.691873     534 log.go:172] (0xc000272aa0) (1) Data frame sent\nI0814 10:34:39.691890     
534 log.go:172] (0xc0009e0630) (0xc000272aa0) Stream removed, broadcasting: 1\nI0814 10:34:39.691910     534 log.go:172] (0xc0009e0630) Go away received\nI0814 10:34:39.692420     534 log.go:172] (0xc0009e0630) (0xc000272aa0) Stream removed, broadcasting: 1\nI0814 10:34:39.692446     534 log.go:172] (0xc0009e0630) (0xc000a16000) Stream removed, broadcasting: 3\nI0814 10:34:39.692458     534 log.go:172] (0xc0009e0630) (0xc000a160a0) Stream removed, broadcasting: 5\n"
Aug 14 10:34:39.700: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 10:34:39.700: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 10:34:39.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 10:34:39.986: INFO: stderr: "I0814 10:34:39.831635     556 log.go:172] (0xc000a74420) (0xc00059e820) Create stream\nI0814 10:34:39.831680     556 log.go:172] (0xc000a74420) (0xc00059e820) Stream added, broadcasting: 1\nI0814 10:34:39.834529     556 log.go:172] (0xc000a74420) Reply frame received for 1\nI0814 10:34:39.834554     556 log.go:172] (0xc000a74420) (0xc0005ba0a0) Create stream\nI0814 10:34:39.834561     556 log.go:172] (0xc000a74420) (0xc0005ba0a0) Stream added, broadcasting: 3\nI0814 10:34:39.835259     556 log.go:172] (0xc000a74420) Reply frame received for 3\nI0814 10:34:39.835280     556 log.go:172] (0xc000a74420) (0xc0005ba140) Create stream\nI0814 10:34:39.835287     556 log.go:172] (0xc000a74420) (0xc0005ba140) Stream added, broadcasting: 5\nI0814 10:34:39.835865     556 log.go:172] (0xc000a74420) Reply frame received for 5\nI0814 10:34:39.894535     556 log.go:172] (0xc000a74420) Data frame received for 5\nI0814 10:34:39.894567     556 log.go:172] (0xc0005ba140) (5) Data frame handling\nI0814 10:34:39.894581     556 log.go:172] (0xc0005ba140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 10:34:39.977127     556 log.go:172] (0xc000a74420) Data frame received for 3\nI0814 10:34:39.977176     556 log.go:172] (0xc0005ba0a0) (3) Data frame handling\nI0814 10:34:39.977202     556 log.go:172] (0xc0005ba0a0) (3) Data frame sent\nI0814 10:34:39.977214     556 log.go:172] (0xc000a74420) Data frame received for 3\nI0814 10:34:39.977222     556 log.go:172] (0xc0005ba0a0) (3) Data frame handling\nI0814 10:34:39.977239     556 log.go:172] (0xc000a74420) Data frame received for 5\nI0814 10:34:39.977250     556 log.go:172] (0xc0005ba140) (5) Data frame handling\nI0814 10:34:39.978704     556 log.go:172] (0xc000a74420) Data frame received for 1\nI0814 10:34:39.978730     556 log.go:172] (0xc00059e820) (1) Data frame handling\nI0814 10:34:39.978741     556 log.go:172] (0xc00059e820) (1) Data frame sent\nI0814 10:34:39.978750     
556 log.go:172] (0xc000a74420) (0xc00059e820) Stream removed, broadcasting: 1\nI0814 10:34:39.978763     556 log.go:172] (0xc000a74420) Go away received\nI0814 10:34:39.979086     556 log.go:172] (0xc000a74420) (0xc00059e820) Stream removed, broadcasting: 1\nI0814 10:34:39.979101     556 log.go:172] (0xc000a74420) (0xc0005ba0a0) Stream removed, broadcasting: 3\nI0814 10:34:39.979108     556 log.go:172] (0xc000a74420) (0xc0005ba140) Stream removed, broadcasting: 5\n"
Aug 14 10:34:39.986: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 10:34:39.986: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 10:34:39.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 10:34:40.267: INFO: stderr: "I0814 10:34:40.110980     576 log.go:172] (0xc00092a6e0) (0xc0009106e0) Create stream\nI0814 10:34:40.111028     576 log.go:172] (0xc00092a6e0) (0xc0009106e0) Stream added, broadcasting: 1\nI0814 10:34:40.114826     576 log.go:172] (0xc00092a6e0) Reply frame received for 1\nI0814 10:34:40.114859     576 log.go:172] (0xc00092a6e0) (0xc00079ce60) Create stream\nI0814 10:34:40.114866     576 log.go:172] (0xc00092a6e0) (0xc00079ce60) Stream added, broadcasting: 3\nI0814 10:34:40.115609     576 log.go:172] (0xc00092a6e0) Reply frame received for 3\nI0814 10:34:40.115643     576 log.go:172] (0xc00092a6e0) (0xc000910000) Create stream\nI0814 10:34:40.115651     576 log.go:172] (0xc00092a6e0) (0xc000910000) Stream added, broadcasting: 5\nI0814 10:34:40.116466     576 log.go:172] (0xc00092a6e0) Reply frame received for 5\nI0814 10:34:40.198544     576 log.go:172] (0xc00092a6e0) Data frame received for 5\nI0814 10:34:40.198574     576 log.go:172] (0xc000910000) (5) Data frame handling\nI0814 10:34:40.198588     576 log.go:172] (0xc000910000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 10:34:40.257114     576 log.go:172] (0xc00092a6e0) Data frame received for 3\nI0814 10:34:40.257145     576 log.go:172] (0xc00079ce60) (3) Data frame handling\nI0814 10:34:40.257160     576 log.go:172] (0xc00079ce60) (3) Data frame sent\nI0814 10:34:40.257244     576 log.go:172] (0xc00092a6e0) Data frame received for 5\nI0814 10:34:40.257254     576 log.go:172] (0xc000910000) (5) Data frame handling\nI0814 10:34:40.257270     576 log.go:172] (0xc00092a6e0) Data frame received for 3\nI0814 10:34:40.257276     576 log.go:172] (0xc00079ce60) (3) Data frame handling\nI0814 10:34:40.260413     576 log.go:172] (0xc00092a6e0) Data frame received for 1\nI0814 10:34:40.260442     576 log.go:172] (0xc0009106e0) (1) Data frame handling\nI0814 10:34:40.260466     576 log.go:172] (0xc0009106e0) (1) Data frame sent\nI0814 10:34:40.260479     
576 log.go:172] (0xc00092a6e0) (0xc0009106e0) Stream removed, broadcasting: 1\nI0814 10:34:40.260500     576 log.go:172] (0xc00092a6e0) Go away received\nI0814 10:34:40.261090     576 log.go:172] (0xc00092a6e0) (0xc0009106e0) Stream removed, broadcasting: 1\nI0814 10:34:40.261112     576 log.go:172] (0xc00092a6e0) (0xc00079ce60) Stream removed, broadcasting: 3\nI0814 10:34:40.261125     576 log.go:172] (0xc00092a6e0) (0xc000910000) Stream removed, broadcasting: 5\n"
Aug 14 10:34:40.267: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 10:34:40.267: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 10:34:40.267: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 10:34:40.270: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Aug 14 10:34:50.605: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 10:34:50.605: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 10:34:50.605: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 10:34:50.652: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 14 10:34:50.652: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:50.652: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:50.652: INFO: ss-2  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:50.652: INFO: 
Aug 14 10:34:50.652: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 10:34:52.038: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 14 10:34:52.038: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:52.038: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:52.038: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:52.038: INFO: 
Aug 14 10:34:52.038: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 10:34:53.217: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 14 10:34:53.218: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:53.218: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:53.218: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:53.218: INFO: 
Aug 14 10:34:53.218: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 10:34:54.763: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 14 10:34:54.763: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:54.763: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:54.763: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:54.763: INFO: 
Aug 14 10:34:54.763: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 10:34:56.163: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 10:34:56.163: INFO: ss-0  iruya-worker  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:56.163: INFO: ss-2  iruya-worker  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:56.163: INFO: 
Aug 14 10:34:56.163: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 14 10:34:57.199: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 10:34:57.199: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:57.199: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:57.199: INFO: 
Aug 14 10:34:57.199: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 14 10:34:58.407: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 10:34:58.407: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:58.407: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:58.407: INFO: 
Aug 14 10:34:58.407: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 14 10:34:59.412: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 10:34:59.412: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:34:59.412: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:34:59.412: INFO: 
Aug 14 10:34:59.412: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 14 10:35:00.416: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 10:35:00.416: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:33:56 +0000 UTC  }]
Aug 14 10:35:00.416: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:34:17 +0000 UTC  }]
Aug 14 10:35:00.416: INFO: 
Aug 14 10:35:00.416: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6875
Aug 14 10:35:01.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:35:01.547: INFO: rc: 1
Aug 14 10:35:01.547: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc003007680 exit status 1   true [0xc0026d4200 0xc0026d4248 0xc0026d4290] [0xc0026d4200 0xc0026d4248 0xc0026d4290] [0xc0026d4238 0xc0026d4270] [0xba7140 0xba7140] 0xc0027c2480 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Aug 14 10:35:11.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:35:12.209: INFO: rc: 1
Aug 14 10:35:12.209: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0025528d0 exit status 1   true [0xc0009f55d8 0xc0009f5678 0xc0009f56d8] [0xc0009f55d8 0xc0009f5678 0xc0009f56d8] [0xc0009f5610 0xc0009f56d0] [0xba7140 0xba7140] 0xc0030d6ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:35:22.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:35:22.306: INFO: rc: 1
Aug 14 10:35:22.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127d230 exit status 1   true [0xc001294078 0xc001294090 0xc0012940a8] [0xc001294078 0xc001294090 0xc0012940a8] [0xc001294088 0xc0012940a0] [0xba7140 0xba7140] 0xc001d146c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:35:32.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:35:32.571: INFO: rc: 1
Aug 14 10:35:32.571: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127d2f0 exit status 1   true [0xc0012940b0 0xc0012940c8 0xc0012940e0] [0xc0012940b0 0xc0012940c8 0xc0012940e0] [0xc0012940c0 0xc0012940d8] [0xba7140 0xba7140] 0xc001d15380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:35:42.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:35:42.667: INFO: rc: 1
Aug 14 10:35:42.667: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127d3b0 exit status 1   true [0xc0012940e8 0xc001294100 0xc001294118] [0xc0012940e8 0xc001294100 0xc001294118] [0xc0012940f8 0xc001294110] [0xba7140 0xba7140] 0xc001d15bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:35:52.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:35:52.772: INFO: rc: 1
Aug 14 10:35:52.772: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003007740 exit status 1   true [0xc0026d4298 0xc0026d42b8 0xc0026d4300] [0xc0026d4298 0xc0026d42b8 0xc0026d4300] [0xc0026d42a8 0xc0026d42f0] [0xba7140 0xba7140] 0xc0027c2f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:36:02.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:36:02.878: INFO: rc: 1
Aug 14 10:36:02.878: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c822d0 exit status 1   true [0xc00038a038 0xc00038a138 0xc00038a158] [0xc00038a038 0xc00038a138 0xc00038a158] [0xc00038a130 0xc00038a148] [0xba7140 0xba7140] 0xc0022994a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:36:12.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:36:17.052: INFO: rc: 1
Aug 14 10:36:17.052: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127d470 exit status 1   true [0xc001294120 0xc001294138 0xc001294150] [0xc001294120 0xc001294138 0xc001294150] [0xc001294130 0xc001294148] [0xba7140 0xba7140] 0xc0019b6fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:36:27.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:36:27.489: INFO: rc: 1
Aug 14 10:36:27.490: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0025529f0 exit status 1   true [0xc0009f56e8 0xc0009f57b0 0xc0009f5828] [0xc0009f56e8 0xc0009f57b0 0xc0009f5828] [0xc0009f5788 0xc0009f57d0] [0xba7140 0xba7140] 0xc0030d71a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:36:37.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:36:37.587: INFO: rc: 1
Aug 14 10:36:37.587: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127d560 exit status 1   true [0xc001294158 0xc001294170 0xc001294188] [0xc001294158 0xc001294170 0xc001294188] [0xc001294168 0xc001294180] [0xba7140 0xba7140] 0xc00216be60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:36:47.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:36:47.682: INFO: rc: 1
Aug 14 10:36:47.682: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002698090 exit status 1   true [0xc000011670 0xc000011cb0 0xc001294008] [0xc000011670 0xc000011cb0 0xc001294008] [0xc000011bd0 0xc001294000] [0xba7140 0xba7140] 0xc0019b6840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:36:57.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:36:57.788: INFO: rc: 1
Aug 14 10:36:57.789: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028b00c0 exit status 1   true [0xc0009f4068 0xc0009f41e0 0xc0009f4390] [0xc0009f4068 0xc0009f41e0 0xc0009f4390] [0xc0009f4118 0xc0009f4360] [0xba7140 0xba7140] 0xc001d14a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:37:07.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:37:07.889: INFO: rc: 1
Aug 14 10:37:07.889: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c090 exit status 1   true [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4010 0xc0026d4028] [0xba7140 0xba7140] 0xc00204f0e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:37:17.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:37:17.987: INFO: rc: 1
Aug 14 10:37:17.987: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c150 exit status 1   true [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4050 0xc0026d4068] [0xba7140 0xba7140] 0xc00275e0c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:37:27.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:37:28.092: INFO: rc: 1
Aug 14 10:37:28.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028b01e0 exit status 1   true [0xc0009f44e8 0xc0009f4b48 0xc0009f4ba0] [0xc0009f44e8 0xc0009f4b48 0xc0009f4ba0] [0xc0009f4af0 0xc0009f4b60] [0xba7140 0xba7140] 0xc001d155c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:37:38.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:37:38.262: INFO: rc: 1
Aug 14 10:37:38.262: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0025520c0 exit status 1   true [0xc00038a000 0xc00038a130 0xc00038a148] [0xc00038a000 0xc00038a130 0xc00038a148] [0xc00038a0a8 0xc00038a140] [0xba7140 0xba7140] 0xc0030d6240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:37:48.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:37:48.361: INFO: rc: 1
Aug 14 10:37:48.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028b02d0 exit status 1   true [0xc0009f4bd8 0xc0009f4d20 0xc0009f4e30] [0xc0009f4bd8 0xc0009f4d20 0xc0009f4e30] [0xc0009f4cf8 0xc0009f4e08] [0xba7140 0xba7140] 0xc001d15f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:37:58.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:37:58.473: INFO: rc: 1
Aug 14 10:37:58.473: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c210 exit status 1   true [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4088 0xc0026d40a0] [0xba7140 0xba7140] 0xc00275e3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:38:08.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:38:08.571: INFO: rc: 1
Aug 14 10:38:08.571: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c2d0 exit status 1   true [0xc0026d40b0 0xc0026d40d0 0xc0026d40e8] [0xc0026d40b0 0xc0026d40d0 0xc0026d40e8] [0xc0026d40c8 0xc0026d40e0] [0xba7140 0xba7140] 0xc00275f680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:38:18.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:38:18.673: INFO: rc: 1
Aug 14 10:38:18.673: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c3c0 exit status 1   true [0xc0026d4100 0xc0026d4148 0xc0026d4178] [0xc0026d4100 0xc0026d4148 0xc0026d4178] [0xc0026d4130 0xc0026d4168] [0xba7140 0xba7140] 0xc00275f980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:38:28.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:38:28.777: INFO: rc: 1
Aug 14 10:38:28.777: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002552210 exit status 1   true [0xc00038a158 0xc00038a180 0xc00038a1a0] [0xc00038a158 0xc00038a180 0xc00038a1a0] [0xc00038a178 0xc00038a190] [0xba7140 0xba7140] 0xc0030d6540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:38:38.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:38:38.872: INFO: rc: 1
Aug 14 10:38:38.872: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028b0090 exit status 1   true [0xc000011670 0xc000011cb0 0xc0009f4098] [0xc000011670 0xc000011cb0 0xc0009f4098] [0xc000011bd0 0xc0009f4068] [0xba7140 0xba7140] 0xc00204f0e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:38:48.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:38:48.970: INFO: rc: 1
Aug 14 10:38:48.971: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028b01b0 exit status 1   true [0xc0009f4118 0xc0009f4360 0xc0009f4a88] [0xc0009f4118 0xc0009f4360 0xc0009f4a88] [0xc0009f4288 0xc0009f44e8] [0xba7140 0xba7140] 0xc001d142a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:38:58.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:38:59.076: INFO: rc: 1
Aug 14 10:38:59.077: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c0c0 exit status 1   true [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4010 0xc0026d4028] [0xba7140 0xba7140] 0xc0011d00c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:39:09.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:39:09.170: INFO: rc: 1
Aug 14 10:39:09.170: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c1b0 exit status 1   true [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4050 0xc0026d4068] [0xba7140 0xba7140] 0xc0011d0600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:39:19.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:39:19.268: INFO: rc: 1
Aug 14 10:39:19.269: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00127c2a0 exit status 1   true [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4088 0xc0026d40a0] [0xba7140 0xba7140] 0xc0011d0900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:39:29.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:39:29.542: INFO: rc: 1
Aug 14 10:39:29.542: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002552090 exit status 1   true [0xc001294000 0xc001294018 0xc001294030] [0xc001294000 0xc001294018 0xc001294030] [0xc001294010 0xc001294028] [0xba7140 0xba7140] 0xc00275e240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:39:39.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:39:39.635: INFO: rc: 1
Aug 14 10:39:39.635: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002698150 exit status 1   true [0xc00038a000 0xc00038a130 0xc00038a148] [0xc00038a000 0xc00038a130 0xc00038a148] [0xc00038a0a8 0xc00038a140] [0xba7140 0xba7140] 0xc0019b7320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:39:49.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:39:49.963: INFO: rc: 1
Aug 14 10:39:49.963: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002552180 exit status 1   true [0xc001294038 0xc001294050 0xc001294068] [0xc001294038 0xc001294050 0xc001294068] [0xc001294048 0xc001294060] [0xba7140 0xba7140] 0xc00275f500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:39:59.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:40:00.597: INFO: rc: 1
Aug 14 10:40:00.597: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0025522a0 exit status 1   true [0xc001294070 0xc001294088 0xc0012940a0] [0xc001294070 0xc001294088 0xc0012940a0] [0xc001294080 0xc001294098] [0xba7140 0xba7140] 0xc00275f800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 14 10:40:10.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 10:40:10.693: INFO: rc: 1
Aug 14 10:40:10.693: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Aug 14 10:40:10.693: INFO: Scaling statefulset ss to 0
Aug 14 10:40:10.701: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 14 10:40:10.704: INFO: Deleting all statefulset in ns statefulset-6875
Aug 14 10:40:10.706: INFO: Scaling statefulset ss to 0
Aug 14 10:40:10.714: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 10:40:10.716: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:40:10.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6875" for this suite.
Aug 14 10:40:16.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:40:17.007: INFO: namespace statefulset-6875 deletion completed in 6.253347657s

• [SLOW TEST:381.436 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
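The repeated `RunHostCmd` failures above come from the e2e framework's fixed-interval retry loop: the same `kubectl exec` is reissued every 10s and keeps exiting with rc 1 while pod `ss-0` does not exist. A minimal sketch of that pattern, with `run_host_cmd` as a stand-in stub for the real `kubectl exec` call (it always fails here, so no cluster is needed):

```shell
#!/bin/sh
# Stand-in for: kubectl --kubeconfig=/root/.kube/config exec \
#   --namespace=statefulset-6875 ss-0 -- /bin/sh -x -c \
#   'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
# Always fails, mimicking 'Error from server (NotFound): pods "ss-0" not found'.
run_host_cmd() {
  return 1
}

attempts=0
max_attempts=3   # the real framework retries until an overall timeout instead
while [ "$attempts" -lt "$max_attempts" ]; do
  attempts=$((attempts + 1))
  if run_host_cmd; then
    break
  fi
  # the framework logs "Waiting 10s to retry failed RunHostCmd" here;
  # sleep 0 keeps the sketch fast
  sleep 0
done
echo "rc: 1 after $attempts attempts"
```

Note the `|| true` on the remote command: the nonzero rc the log reports comes from the server-side NotFound error, not from `mv` itself.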
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:40:17.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3282
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3282 to expose endpoints map[]
Aug 14 10:40:17.214: INFO: Get endpoints failed (34.510776ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 14 10:40:18.219: INFO: successfully validated that service endpoint-test2 in namespace services-3282 exposes endpoints map[] (1.038767471s elapsed)
STEP: Creating pod pod1 in namespace services-3282
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3282 to expose endpoints map[pod1:[80]]
Aug 14 10:40:23.264: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.039458007s elapsed, will retry)
Aug 14 10:40:24.270: INFO: successfully validated that service endpoint-test2 in namespace services-3282 exposes endpoints map[pod1:[80]] (6.046180165s elapsed)
STEP: Creating pod pod2 in namespace services-3282
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3282 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 14 10:40:28.537: INFO: Unexpected endpoints: found map[71569783-e7f1-4226-b0d7-6d1576709d4a:[80]], expected map[pod1:[80] pod2:[80]] (4.262882983s elapsed, will retry)
Aug 14 10:40:29.551: INFO: successfully validated that service endpoint-test2 in namespace services-3282 exposes endpoints map[pod1:[80] pod2:[80]] (5.276806518s elapsed)
STEP: Deleting pod pod1 in namespace services-3282
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3282 to expose endpoints map[pod2:[80]]
Aug 14 10:40:30.609: INFO: successfully validated that service endpoint-test2 in namespace services-3282 exposes endpoints map[pod2:[80]] (1.054964625s elapsed)
STEP: Deleting pod pod2 in namespace services-3282
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3282 to expose endpoints map[]
Aug 14 10:40:30.654: INFO: successfully validated that service endpoint-test2 in namespace services-3282 exposes endpoints map[] (40.881804ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:40:31.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3282" for this suite.
Aug 14 10:40:53.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:40:54.011: INFO: namespace services-3282 deletion completed in 22.452272659s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:37.004 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
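The endpoint checks above follow a poll-until-match pattern: the framework repeatedly reads the service's Endpoints object and compares it against the expected pod:port map, logging "will retry" until the two agree. A self-contained sketch, where `get_endpoints` is a hypothetical stand-in for reading the `endpoint-test2` Endpoints object:

```shell
#!/bin/sh
expected="pod1:[80]"
attempt=0

# Hypothetical stand-in for the Endpoints read; returns nothing on the first
# poll (mirroring "Unexpected endpoints: found map[] ... will retry"), then
# the expected map once the pod's readiness has propagated.
get_endpoints() {
  if [ "$attempt" -ge 2 ]; then
    echo "$expected"
  fi
}

while [ "$attempt" -lt 10 ]; do
  attempt=$((attempt + 1))
  observed=$(get_endpoints)
  if [ "$observed" = "$expected" ]; then
    echo "successfully validated endpoints after $attempt polls"
    break
  fi
done
```

The same loop explains the deletion half of the test: after `pod1` is deleted, the expected map shrinks to `map[pod2:[80]]` and the poll converges again.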
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:40:54.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 14 10:40:54.122: INFO: Waiting up to 5m0s for pod "pod-ceb12b7a-6013-4172-987e-351fedd00f63" in namespace "emptydir-9802" to be "success or failure"
Aug 14 10:40:54.209: INFO: Pod "pod-ceb12b7a-6013-4172-987e-351fedd00f63": Phase="Pending", Reason="", readiness=false. Elapsed: 87.223801ms
Aug 14 10:40:56.241: INFO: Pod "pod-ceb12b7a-6013-4172-987e-351fedd00f63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118915687s
Aug 14 10:40:58.244: INFO: Pod "pod-ceb12b7a-6013-4172-987e-351fedd00f63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122787627s
Aug 14 10:41:00.249: INFO: Pod "pod-ceb12b7a-6013-4172-987e-351fedd00f63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127287057s
Aug 14 10:41:02.343: INFO: Pod "pod-ceb12b7a-6013-4172-987e-351fedd00f63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.221087199s
STEP: Saw pod success
Aug 14 10:41:02.343: INFO: Pod "pod-ceb12b7a-6013-4172-987e-351fedd00f63" satisfied condition "success or failure"
Aug 14 10:41:02.346: INFO: Trying to get logs from node iruya-worker2 pod pod-ceb12b7a-6013-4172-987e-351fedd00f63 container test-container: 
STEP: delete the pod
Aug 14 10:41:02.583: INFO: Waiting for pod pod-ceb12b7a-6013-4172-987e-351fedd00f63 to disappear
Aug 14 10:41:02.611: INFO: Pod pod-ceb12b7a-6013-4172-987e-351fedd00f63 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:41:02.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9802" for this suite.
Aug 14 10:41:08.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:41:08.718: INFO: namespace emptydir-9802 deletion completed in 6.103410827s

• [SLOW TEST:14.707 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
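The "correct mode" assertion above runs inside the test pod: the test container prints the permission bits of the mounted emptyDir path, and the framework checks them against the expected value. The local sketch below mimics that check on a temp directory rather than a real volume; 0777 is used as the illustrative expectation since emptyDir directories default to world-writable permissions on Linux:

```shell
#!/bin/sh
# Mimic the in-pod permission check on a local directory (no volume involved).
vol=$(mktemp -d)
chmod 0777 "$vol"            # illustrative default-medium emptyDir mode
mode=$(stat -c '%a' "$vol")  # GNU stat; prints octal permission bits
echo "perms of $vol: $mode"
rmdir "$vol"
```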
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:41:08.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 10:41:09.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9579'
Aug 14 10:41:10.427: INFO: stderr: ""
Aug 14 10:41:10.427: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 14 10:41:10.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9579'
Aug 14 10:41:11.335: INFO: stderr: ""
Aug 14 10:41:11.335: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 14 10:41:12.372: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 10:41:12.373: INFO: Found 0 / 1
Aug 14 10:41:13.339: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 10:41:13.339: INFO: Found 0 / 1
Aug 14 10:41:14.433: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 10:41:14.433: INFO: Found 0 / 1
Aug 14 10:41:15.340: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 10:41:15.340: INFO: Found 0 / 1
Aug 14 10:41:16.340: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 10:41:16.340: INFO: Found 0 / 1
Aug 14 10:41:17.340: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 10:41:17.340: INFO: Found 1 / 1
Aug 14 10:41:17.340: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 14 10:41:17.343: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 10:41:17.343: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 14 10:41:17.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gs5kl --namespace=kubectl-9579'
Aug 14 10:41:17.460: INFO: stderr: ""
Aug 14 10:41:17.460: INFO: stdout: "Name:           redis-master-gs5kl\nNamespace:      kubectl-9579\nPriority:       0\nNode:           iruya-worker/172.18.0.5\nStart Time:     Fri, 14 Aug 2020 10:41:10 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.1.29\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://f9d05635e90f5df4e23ed831f67b4914f34aae504595a08e3aef14673427b063\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 14 Aug 2020 10:41:15 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-48ksg (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-48ksg:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-48ksg\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  7s    default-scheduler      Successfully assigned kubectl-9579/redis-master-gs5kl to iruya-worker\n  Normal  Pulled     4s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started  
  2s    kubelet, iruya-worker  Started container redis-master\n"
Aug 14 10:41:17.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9579'
Aug 14 10:41:17.581: INFO: stderr: ""
Aug 14 10:41:17.581: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-9579\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: redis-master-gs5kl\n"
Aug 14 10:41:17.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9579'
Aug 14 10:41:17.673: INFO: stderr: ""
Aug 14 10:41:17.673: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-9579\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.212.168\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.29:6379\nSession Affinity:  None\nEvents:            \n"
Aug 14 10:41:17.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug 14 10:41:17.787: INFO: stderr: ""
Aug 14 10:41:17.787: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 19 Jul 2020 21:15:33 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 14 Aug 2020 10:40:53 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 14 Aug 2020 10:40:53 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 14 Aug 2020 10:40:53 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 14 Aug 2020 10:40:53 +0000   Sun, 19 Jul 2020 21:16:03 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n 
ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 ca83ac9a93d54502bb9afb972c3f1f0b\n System UUID:                1d4ac873-683f-4805-8579-15bbb4e4df77\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-clz9n                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     25d\n  kube-system                coredns-5d4dd4b4db-w42x4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     25d\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25d\n  kube-system                kindnet-xbjsm                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      25d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         25d\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         25d\n  kube-system                kube-proxy-nwhvb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)        
 25d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         25d\n  local-path-storage         local-path-provisioner-668779bd7-sf66r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 14 10:41:17.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9579'
Aug 14 10:41:17.880: INFO: stderr: ""
Aug 14 10:41:17.880: INFO: stdout: "Name:         kubectl-9579\nLabels:       e2e-framework=kubectl\n              e2e-run=3a05f229-9ce6-45f5-8825-089b8b271804\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:41:17.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9579" for this suite.
Aug 14 10:41:39.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:41:40.261: INFO: namespace kubectl-9579 deletion completed in 22.374596683s

• [SLOW TEST:31.543 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
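The test above walks `kubectl describe` across every object kind it touched: the pod, its owning rc, the service, a node, and the namespace itself. The sequence can be reconstructed as below; the commands are only echoed (no cluster is assumed), and the resource names are the ones that appear in the log:

```shell
#!/bin/sh
KUBECTL="kubectl --kubeconfig=/root/.kube/config"
ns="kubectl-9579"
count=0

describe() {
  count=$((count + 1))
  # Echoed rather than executed; each matches a "Running '...'" line above.
  echo "$KUBECTL describe $*"
}

describe "pod redis-master-gs5kl --namespace=$ns"
describe "rc redis-master --namespace=$ns"
describe "service redis-master --namespace=$ns"
describe "node iruya-control-plane"
describe "namespace $ns"
```

The node and namespace describes take no `--namespace` flag, matching the log: both are cluster-scoped lookups.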
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:41:40.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 14 10:41:46.598: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b6f4fbe2-180d-4a27-a5eb-478ac56bff3b,GenerateName:,Namespace:events-6467,SelfLink:/api/v1/namespaces/events-6467/pods/send-events-b6f4fbe2-180d-4a27-a5eb-478ac56bff3b,UID:d56ba7b0-50a8-426a-a79b-bd1ba5c69439,ResourceVersion:4870251,Generation:0,CreationTimestamp:2020-08-14 10:41:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 506314956,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dfc5m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dfc5m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-dfc5m true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030ec320} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0030ec340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:41:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:41:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 10:41:40 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.217,StartTime:2020-08-14 10:41:40 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-14 10:41:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://68a3cfa7daeb60ad6cdf98bb822005b710e53598dd201f704474081a57308e38}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Aug 14 10:41:48.603: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 14 10:41:50.609: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:41:50.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6467" for this suite.
Aug 14 10:42:28.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:42:28.721: INFO: namespace events-6467 deletion completed in 38.100705738s

• [SLOW TEST:48.460 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
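The "Saw scheduler event" / "Saw kubelet event" checks above query the Events API filtered down to this pod and a particular event source. Roughly equivalent kubectl queries are sketched below; the commands are echoed rather than run, and the exact field-selector keys should be treated as an assumption rather than what the test itself issues:

```shell
#!/bin/sh
pod="send-events-b6f4fbe2-180d-4a27-a5eb-478ac56bff3b"
ns="events-6467"

# Assumed-equivalent queries for the scheduler and kubelet event checks;
# echoed only, since no cluster is available here.
sched_cmd="kubectl get events --namespace=$ns --field-selector involvedObject.name=$pod,source=default-scheduler"
kubelet_cmd="kubectl get events --namespace=$ns --field-selector involvedObject.name=$pod,source=kubelet"
echo "$sched_cmd"
echo "$kubelet_cmd"
```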
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:42:28.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0814 10:43:09.249747       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 10:43:09.249: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:43:09.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7845" for this suite.
Aug 14 10:43:19.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:43:20.012: INFO: namespace gc-7845 deletion completed in 10.760180389s

• [SLOW TEST:51.290 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
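"Orphan pods if delete options say so" is a deletion-propagation check: the rc is deleted with orphan propagation, then the test waits 30 seconds to confirm the garbage collector leaves the pods alone. A sketch of the equivalent client-side request is below; the command is echoed, not run, the rc name `my-rc` is hypothetical (the log only says "create the rc"), and note that current kubectl spells the policy `--cascade=orphan` while releases contemporary with this log (v1.15) used `--cascade=false`:

```shell
#!/bin/sh
ns="gc-7845"
rc="my-rc"   # hypothetical; the log does not show the rc's name

# Orphan propagation: delete the owner, keep the dependents running.
cmd="kubectl delete rc $rc --namespace=$ns --cascade=orphan"
echo "$cmd"

# The equivalent API-level deleteOptions body:
echo '{"propagationPolicy": "Orphan"}'
```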
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:43:20.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 10:43:20.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880" in namespace "downward-api-36" to be "success or failure"
Aug 14 10:43:20.836: INFO: Pod "downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880": Phase="Pending", Reason="", readiness=false. Elapsed: 152.534877ms
Aug 14 10:43:23.171: INFO: Pod "downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487879212s
Aug 14 10:43:25.175: INFO: Pod "downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491588581s
Aug 14 10:43:27.179: INFO: Pod "downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.4960041s
STEP: Saw pod success
Aug 14 10:43:27.179: INFO: Pod "downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880" satisfied condition "success or failure"
Aug 14 10:43:27.182: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880 container client-container: 
STEP: delete the pod
Aug 14 10:43:27.254: INFO: Waiting for pod downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880 to disappear
Aug 14 10:43:27.259: INFO: Pod downwardapi-volume-5fed9da1-c625-4a7b-86cc-bd566c154880 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:43:27.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-36" for this suite.
Aug 14 10:43:33.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:43:33.346: INFO: namespace downward-api-36 deletion completed in 6.084851694s

• [SLOW TEST:13.334 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
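The "Waiting up to 5m0s for pod ... Phase=\"Pending\" ... Elapsed" lines above are produced by a simple poll loop. A self-contained sketch of that pattern (a hypothetical helper, not the e2e framework's WaitFor* code):

```python
import time

def wait_for_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase,
    mirroring the 'Waiting up to 5m0s ... Elapsed' log lines."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", readiness=false. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod did not finish within {timeout}s")
        time.sleep(interval)

# Simulated pod: Pending for two polls, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_phase(lambda: next(phases), timeout=5.0, interval=0.0)
```

The test above declares "Saw pod success" once this loop returns "Succeeded", which it treats as satisfying the "success or failure" condition.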
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:43:33.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 10:43:33.472: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c" in namespace "projected-1558" to be "success or failure"
Aug 14 10:43:33.474: INFO: Pod "downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.553503ms
Aug 14 10:43:35.553: INFO: Pod "downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081436704s
Aug 14 10:43:37.585: INFO: Pod "downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112694554s
Aug 14 10:43:39.588: INFO: Pod "downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116445383s
STEP: Saw pod success
Aug 14 10:43:39.588: INFO: Pod "downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c" satisfied condition "success or failure"
Aug 14 10:43:39.591: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c container client-container: 
STEP: delete the pod
Aug 14 10:43:39.628: INFO: Waiting for pod downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c to disappear
Aug 14 10:43:39.654: INFO: Pod downwardapi-volume-e102f4d6-9c24-4d9f-8178-ae64805cef8c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:43:39.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1558" for this suite.
Aug 14 10:43:45.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:43:45.750: INFO: namespace projected-1558 deletion completed in 6.092368316s

• [SLOW TEST:12.403 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:43:45.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4042.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4042.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 10:43:55.896: INFO: DNS probes using dns-test-46753c59-cc53-43be-a7ff-d27d34e832ed succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4042.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4042.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 10:44:04.639: INFO: File wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:04.643: INFO: File jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:04.643: INFO: Lookups using dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 failed for: [wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local]

Aug 14 10:44:09.648: INFO: File wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:09.651: INFO: File jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:09.651: INFO: Lookups using dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 failed for: [wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local]

Aug 14 10:44:14.648: INFO: File wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:14.652: INFO: File jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:14.652: INFO: Lookups using dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 failed for: [wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local]

Aug 14 10:44:19.689: INFO: File wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:19.692: INFO: File jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local from pod  dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 14 10:44:19.692: INFO: Lookups using dns-4042/dns-test-25c26833-8640-459a-8205-4219e801b726 failed for: [wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local]

Aug 14 10:44:24.652: INFO: DNS probes using dns-test-25c26833-8640-459a-8205-4219e801b726 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4042.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4042.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4042.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4042.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 10:44:33.905: INFO: DNS probes using dns-test-055f2687-b0e8-4f79-a1f7-c19b0222adf5 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:44:33.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4042" for this suite.
Aug 14 10:44:46.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:44:46.092: INFO: namespace dns-4042 deletion completed in 12.09462586s

• [SLOW TEST:60.342 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
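The retry pattern above, where the probe keeps reporting the stale 'foo.example.com.' answer until the ExternalName change to 'bar.example.com.' propagates, is a poll-until-converged check. A stubbed sketch (hypothetical resolver in place of the test's dig loop):

```python
def wait_for_cname(lookup, expected, max_polls=30):
    """Poll lookup() until it returns the expected CNAME target,
    mirroring the probe's retry-on-stale-record loop in the log."""
    for _ in range(max_polls):
        got = lookup()
        if got == expected:
            return got
        print(f"contains '{got}' instead of '{expected}'")
    raise AssertionError(f"CNAME never converged to {expected!r}")

# Stub resolver: stale answer for three polls, then the updated record.
answers = iter(["foo.example.com."] * 3 + ["bar.example.com."] * 27)
got = wait_for_cname(lambda: next(answers), "bar.example.com.")
```

The "DNS probes ... succeeded" line at 10:44:24 is this loop finally observing the updated record after roughly 20 seconds of stale answers.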
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:44:46.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Aug 14 10:44:46.859: INFO: created pod pod-service-account-defaultsa
Aug 14 10:44:46.859: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 14 10:44:46.929: INFO: created pod pod-service-account-mountsa
Aug 14 10:44:46.929: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 14 10:44:47.017: INFO: created pod pod-service-account-nomountsa
Aug 14 10:44:47.017: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 14 10:44:47.310: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 14 10:44:47.310: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 14 10:44:47.387: INFO: created pod pod-service-account-mountsa-mountspec
Aug 14 10:44:47.387: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 14 10:44:47.402: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 14 10:44:47.402: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 14 10:44:47.633: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 14 10:44:47.633: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 14 10:44:47.637: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 14 10:44:47.637: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 14 10:44:47.718: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 14 10:44:47.718: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:44:47.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5056" for this suite.
Aug 14 10:45:28.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:45:28.513: INFO: namespace svcaccounts-5056 deletion completed in 40.714811925s

• [SLOW TEST:42.420 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
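The mount/no-mount matrix above follows a simple precedence rule: the pod's automountServiceAccountToken field overrides the ServiceAccount's, and when neither is set the token is mounted by default. A sketch that reproduces the nine results in the log:

```python
def token_volume_mounted(sa_automount, pod_automount):
    """Decide whether the service account token volume is mounted.
    The pod-level automountServiceAccountToken setting wins over the
    ServiceAccount-level one; the default is to mount."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# The nine pods from the log: (SA setting, pod setting) -> mounted?
cases = {
    "defaultsa":             (None,  None,  True),
    "mountsa":               (True,  None,  True),
    "nomountsa":             (False, None,  False),
    "defaultsa-mountspec":   (None,  True,  True),
    "mountsa-mountspec":     (True,  True,  True),
    "nomountsa-mountspec":   (False, True,  True),
    "defaultsa-nomountspec": (None,  False, False),
    "mountsa-nomountspec":   (True,  False, False),
    "nomountsa-nomountspec": (False, False, False),
}
```

Note the two "opting out" rows: nomountsa-mountspec still mounts (pod spec overrides the SA), and defaultsa-nomountspec does not (pod spec overrides the default).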
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:45:28.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 10:45:29.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508" in namespace "projected-7160" to be "success or failure"
Aug 14 10:45:29.133: INFO: Pod "downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508": Phase="Pending", Reason="", readiness=false. Elapsed: 24.828791ms
Aug 14 10:45:31.137: INFO: Pod "downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028826205s
Aug 14 10:45:34.318: INFO: Pod "downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508": Phase="Pending", Reason="", readiness=false. Elapsed: 5.209614009s
Aug 14 10:45:36.331: INFO: Pod "downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508": Phase="Pending", Reason="", readiness=false. Elapsed: 7.222718233s
Aug 14 10:45:38.336: INFO: Pod "downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.226872262s
STEP: Saw pod success
Aug 14 10:45:38.336: INFO: Pod "downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508" satisfied condition "success or failure"
Aug 14 10:45:38.339: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508 container client-container: 
STEP: delete the pod
Aug 14 10:45:38.426: INFO: Waiting for pod downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508 to disappear
Aug 14 10:45:38.433: INFO: Pod downwardapi-volume-c91ca2be-617f-4713-975d-a254a8c86508 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:45:38.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7160" for this suite.
Aug 14 10:45:44.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:45:45.433: INFO: namespace projected-7160 deletion completed in 6.996590645s

• [SLOW TEST:16.920 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:45:45.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Aug 14 10:45:53.263: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:45:55.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9350" for this suite.
Aug 14 10:46:25.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:46:26.901: INFO: namespace replicaset-9350 deletion completed in 31.564920473s

• [SLOW TEST:41.467 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
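Adoption and release in the test above are driven purely by whether the ReplicaSet's selector matches the pod's labels. A minimal equality-based sketch (hypothetical helper covering matchLabels only; real selectors also support set-based expressions):

```python
def selector_matches(selector, labels):
    """True when every key/value pair in the selector is present in the
    pod's labels (matchLabels semantics)."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption-release"}
pod_labels = {"name": "pod-adoption-release"}

# Orphan pod matches the selector -> the ReplicaSet adopts it.
adopted = selector_matches(selector, pod_labels)

# The matched label changes -> the pod no longer matches -> released.
pod_labels["name"] = "something-else"
released = not selector_matches(selector, pod_labels)
```

This is why changing a single label value is enough to flip the pod from adopted to released, as the two STEP lines above show.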
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:46:26.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 14 10:46:27.669: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:46:57.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3541" for this suite.
Aug 14 10:47:09.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:47:09.198: INFO: namespace init-container-3541 deletion completed in 10.85070857s

• [SLOW TEST:42.298 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
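The RestartNever case above reduces to: init containers run sequentially, and the first failure marks the pod Failed without ever starting the app containers. A toy model of that ordering (not kubelet code):

```python
def run_pod_restart_never(init_containers):
    """Toy model for restartPolicy=Never: each init container is a
    callable returning True on success. The first init failure fails
    the whole pod; app containers never start."""
    for init in init_containers:
        if not init():
            return "Failed"  # app containers are never started
    return "Running"  # all init containers succeeded; app starts
```

With any other restart policy the failing init container would be retried instead, which is why this behavior is specific to RestartNever pods.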
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:47:09.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 14 10:47:27.418: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 10:47:28.175: INFO: Pod pod-with-prestop-http-hook still exists
Aug 14 10:47:30.175: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 10:47:30.179: INFO: Pod pod-with-prestop-http-hook still exists
Aug 14 10:47:32.175: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 10:47:32.179: INFO: Pod pod-with-prestop-http-hook still exists
Aug 14 10:47:34.175: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 10:47:34.179: INFO: Pod pod-with-prestop-http-hook still exists
Aug 14 10:47:36.175: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 10:47:36.179: INFO: Pod pod-with-prestop-http-hook still exists
Aug 14 10:47:38.175: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 10:47:38.720: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:47:38.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8708" for this suite.
Aug 14 10:48:05.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:48:05.298: INFO: namespace container-lifecycle-hook-8708 deletion completed in 26.134592493s

• [SLOW TEST:56.100 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:48:05.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-35490891-95b3-4d85-bc2b-04f372cd917f
STEP: Creating configMap with name cm-test-opt-upd-b2b2e4e5-116a-496f-8021-6be4bd0f33a1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-35490891-95b3-4d85-bc2b-04f372cd917f
STEP: Updating configmap cm-test-opt-upd-b2b2e4e5-116a-496f-8021-6be4bd0f33a1
STEP: Creating configMap with name cm-test-opt-create-6929c3d7-c96f-4626-8e6c-0d0a32551dbd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:48:17.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8275" for this suite.
Aug 14 10:48:42.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:48:42.288: INFO: namespace projected-8275 deletion completed in 24.51255051s

• [SLOW TEST:36.989 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:48:42.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 10:48:42.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98" in namespace "projected-8" to be "success or failure"
Aug 14 10:48:42.565: INFO: Pod "downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98": Phase="Pending", Reason="", readiness=false. Elapsed: 130.777444ms
Aug 14 10:48:44.569: INFO: Pod "downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134777205s
Aug 14 10:48:46.691: INFO: Pod "downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255984592s
Aug 14 10:48:49.122: INFO: Pod "downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.687540164s
Aug 14 10:48:51.126: INFO: Pod "downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.691058144s
STEP: Saw pod success
Aug 14 10:48:51.126: INFO: Pod "downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98" satisfied condition "success or failure"
Aug 14 10:48:51.128: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98 container client-container: 
STEP: delete the pod
Aug 14 10:48:51.477: INFO: Waiting for pod downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98 to disappear
Aug 14 10:48:51.781: INFO: Pod downwardapi-volume-6c4709d1-e44c-40c7-90fb-c53d5ed96f98 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:48:51.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8" for this suite.
Aug 14 10:48:57.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:48:57.978: INFO: namespace projected-8 deletion completed in 6.191984382s

• [SLOW TEST:15.690 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
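The "success or failure" wait above polls the pod's phase every couple of seconds until it reaches a terminal state or the 5m0s budget runs out. A minimal sketch of that loop, assuming a hypothetical `get_phase` callable in place of a real GET against the pod resource:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, poll=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it is terminal, like the framework's
    "success or failure" wait. `get_phase` stands in for an API read."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated sequence matching the log: four Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), sleep=lambda _: None)
```

The real framework also tolerates transient API errors between polls; this sketch only models the phase check.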
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:48:57.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 10:48:58.506: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9463e4c2-8396-4011-aa21-65480fddc364", Controller:(*bool)(0xc001ec06ba), BlockOwnerDeletion:(*bool)(0xc001ec06bb)}}
Aug 14 10:48:59.356: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"581aa929-f018-4cfb-a092-3838c3eabcba", Controller:(*bool)(0xc0023e05d2), BlockOwnerDeletion:(*bool)(0xc0023e05d3)}}
Aug 14 10:48:59.916: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bb72b7ec-9c9b-4402-80eb-093523a8c692", Controller:(*bool)(0xc002837e72), BlockOwnerDeletion:(*bool)(0xc002837e73)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:49:05.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7271" for this suite.
Aug 14 10:49:13.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:49:14.166: INFO: namespace gc-7271 deletion completed in 8.823825974s

• [SLOW TEST:16.188 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
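The three `OwnerReferences` lines above wire pod1 → pod3 → pod2 → pod1, a dependency circle the garbage collector must not deadlock on. A small sketch of walking those owner links to confirm the cycle, using only the pod names from the log (the single-owner assumption matches this test but not ownerReferences in general):

```python
def find_owner_cycle(objects):
    """objects: name -> list of owner names (from ownerReferences).
    Follow owner links from each object; return the first cycle found."""
    for start in objects:
        seen, cur = [], start
        while cur is not None and cur not in seen:
            seen.append(cur)
            owners = objects.get(cur, [])
            cur = owners[0] if owners else None
        if cur is not None:
            return seen[seen.index(cur):]
    return None

# Owner links recorded in the log: pod1 -> pod3, pod2 -> pod1, pod3 -> pod2.
cycle = find_owner_cycle({"pod1": ["pod3"], "pod2": ["pod1"], "pod3": ["pod2"]})
```

Because every member of the circle owns another member, no pod is individually "orphaned"; the conformance point is that deletion still completes for all three.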
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:49:14.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6635
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 14 10:49:14.855: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 14 10:49:56.567: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.5 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6635 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 10:49:56.567: INFO: >>> kubeConfig: /root/.kube/config
I0814 10:49:56.605110       6 log.go:172] (0xc001018630) (0xc0022f6c80) Create stream
I0814 10:49:56.605140       6 log.go:172] (0xc001018630) (0xc0022f6c80) Stream added, broadcasting: 1
I0814 10:49:56.606960       6 log.go:172] (0xc001018630) Reply frame received for 1
I0814 10:49:56.606990       6 log.go:172] (0xc001018630) (0xc00290a5a0) Create stream
I0814 10:49:56.607000       6 log.go:172] (0xc001018630) (0xc00290a5a0) Stream added, broadcasting: 3
I0814 10:49:56.607855       6 log.go:172] (0xc001018630) Reply frame received for 3
I0814 10:49:56.607883       6 log.go:172] (0xc001018630) (0xc00290a640) Create stream
I0814 10:49:56.607898       6 log.go:172] (0xc001018630) (0xc00290a640) Stream added, broadcasting: 5
I0814 10:49:56.608680       6 log.go:172] (0xc001018630) Reply frame received for 5
I0814 10:49:57.774096       6 log.go:172] (0xc001018630) Data frame received for 3
I0814 10:49:57.774176       6 log.go:172] (0xc00290a5a0) (3) Data frame handling
I0814 10:49:57.774197       6 log.go:172] (0xc00290a5a0) (3) Data frame sent
I0814 10:49:57.774218       6 log.go:172] (0xc001018630) Data frame received for 5
I0814 10:49:57.774226       6 log.go:172] (0xc00290a640) (5) Data frame handling
I0814 10:49:57.774336       6 log.go:172] (0xc001018630) Data frame received for 3
I0814 10:49:57.774359       6 log.go:172] (0xc00290a5a0) (3) Data frame handling
I0814 10:49:57.776095       6 log.go:172] (0xc001018630) Data frame received for 1
I0814 10:49:57.776128       6 log.go:172] (0xc0022f6c80) (1) Data frame handling
I0814 10:49:57.776157       6 log.go:172] (0xc0022f6c80) (1) Data frame sent
I0814 10:49:57.776186       6 log.go:172] (0xc001018630) (0xc0022f6c80) Stream removed, broadcasting: 1
I0814 10:49:57.776281       6 log.go:172] (0xc001018630) (0xc00290a5a0) Stream removed, broadcasting: 3
I0814 10:49:57.776365       6 log.go:172] (0xc001018630) (0xc00290a640) Stream removed, broadcasting: 5
Aug 14 10:49:57.776: INFO: Found all expected endpoints: [netserver-0]
Aug 14 10:49:57.780: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.53 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6635 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 10:49:57.780: INFO: >>> kubeConfig: /root/.kube/config
I0814 10:49:57.813775       6 log.go:172] (0xc001f23340) (0xc00225c8c0) Create stream
I0814 10:49:57.813810       6 log.go:172] (0xc001f23340) (0xc00225c8c0) Stream added, broadcasting: 1
I0814 10:49:57.815620       6 log.go:172] (0xc001f23340) Reply frame received for 1
I0814 10:49:57.815647       6 log.go:172] (0xc001f23340) (0xc00056b4a0) Create stream
I0814 10:49:57.815656       6 log.go:172] (0xc001f23340) (0xc00056b4a0) Stream added, broadcasting: 3
I0814 10:49:57.816455       6 log.go:172] (0xc001f23340) Reply frame received for 3
I0814 10:49:57.816485       6 log.go:172] (0xc001f23340) (0xc00225c960) Create stream
I0814 10:49:57.816493       6 log.go:172] (0xc001f23340) (0xc00225c960) Stream added, broadcasting: 5
I0814 10:49:57.817247       6 log.go:172] (0xc001f23340) Reply frame received for 5
I0814 10:49:58.877754       6 log.go:172] (0xc001f23340) Data frame received for 3
I0814 10:49:58.877797       6 log.go:172] (0xc00056b4a0) (3) Data frame handling
I0814 10:49:58.877817       6 log.go:172] (0xc00056b4a0) (3) Data frame sent
I0814 10:49:58.877838       6 log.go:172] (0xc001f23340) Data frame received for 5
I0814 10:49:58.877896       6 log.go:172] (0xc00225c960) (5) Data frame handling
I0814 10:49:58.877930       6 log.go:172] (0xc001f23340) Data frame received for 3
I0814 10:49:58.877940       6 log.go:172] (0xc00056b4a0) (3) Data frame handling
I0814 10:49:58.879602       6 log.go:172] (0xc001f23340) Data frame received for 1
I0814 10:49:58.879625       6 log.go:172] (0xc00225c8c0) (1) Data frame handling
I0814 10:49:58.879667       6 log.go:172] (0xc00225c8c0) (1) Data frame sent
I0814 10:49:58.879685       6 log.go:172] (0xc001f23340) (0xc00225c8c0) Stream removed, broadcasting: 1
I0814 10:49:58.879808       6 log.go:172] (0xc001f23340) (0xc00056b4a0) Stream removed, broadcasting: 3
I0814 10:49:58.879815       6 log.go:172] (0xc001f23340) (0xc00225c960) Stream removed, broadcasting: 5
Aug 14 10:49:58.879: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:49:58.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0814 10:49:58.880144       6 log.go:172] (0xc001f23340) Go away received
STEP: Destroying namespace "pod-network-test-6635" for this suite.
Aug 14 10:50:14.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:50:14.779: INFO: namespace pod-network-test-6635 deletion completed in 15.480765867s

• [SLOW TEST:60.612 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
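The `ExecWithOptions` lines run `echo hostName | nc -w 1 -u <pod-ip> 8081` from a host pod and expect the netserver pod to answer with its name. A self-contained sketch of that exchange over loopback UDP sockets (an ephemeral port stands in for 8081, and the server is a local thread rather than a remote pod):

```python
import socket
import threading

def udp_echo_server(sock, hostname):
    # Reply to one "hostName" datagram with the pod name, like the netserver.
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(hostname.encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))           # ephemeral port instead of 8081
port = server.getsockname()[1]
t = threading.Thread(target=udp_echo_server, args=(server, "netserver-0"))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)                  # like `nc -w 1`
client.sendto(b"hostName", ("127.0.0.1", port))
reply = client.recv(1024).decode()
t.join()
server.close()
client.close()
```

The `grep -v '^\s*$'` in the logged command simply drops blank lines from nc's output; the test passes once each expected endpoint name has been seen.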
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:50:14.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-fc4b2604-ec9a-4735-ba83-3d424623b060
STEP: Creating configMap with name cm-test-opt-upd-64a5702d-3d12-4e5d-b7e8-423d216c349c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-fc4b2604-ec9a-4735-ba83-3d424623b060
STEP: Updating configmap cm-test-opt-upd-64a5702d-3d12-4e5d-b7e8-423d216c349c
STEP: Creating configMap with name cm-test-opt-create-4869167f-d8e3-4902-b732-e13e8d2d07f8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:52:04.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1317" for this suite.
Aug 14 10:52:34.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:52:34.525: INFO: namespace configmap-1317 deletion completed in 30.331435334s

• [SLOW TEST:139.746 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
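The optional-updates test deletes one source configMap, updates another, and creates a third, then waits for the projected files to follow. A sketch of the resolution rule being exercised, with hypothetical map names and keys (the real kubelet syncs these into the volume asynchronously):

```python
def project_configmaps(volume_sources, configmaps):
    """Resolve a volume's files from configMap sources; a missing map marked
    optional contributes nothing instead of failing the mount."""
    files = {}
    for src in volume_sources:
        cm = configmaps.get(src["name"])
        if cm is None:
            if not src.get("optional"):
                raise KeyError(src["name"])
            continue
        files.update(cm)
    return files

# After the test's mutations: "del" was deleted, "upd" updated, "create"
# newly created; all three sources are marked optional.
sources = [{"name": "cm-del", "optional": True},
           {"name": "cm-upd", "optional": True},
           {"name": "cm-create", "optional": True}]
live = {"cm-upd": {"data-3": "value-3"}, "cm-create": {"data-1": "value-1"}}
files = project_configmaps(sources, live)
```

Had any source been non-optional and absent, the volume setup would fail instead of serving a partial view.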
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:52:34.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 14 10:52:34.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7176'
Aug 14 10:52:39.158: INFO: stderr: ""
Aug 14 10:52:39.158: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 14 10:52:39.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7176'
Aug 14 10:52:39.310: INFO: stderr: ""
Aug 14 10:52:39.310: INFO: stdout: "update-demo-nautilus-ckrxv update-demo-nautilus-s2mdt "
Aug 14 10:52:39.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckrxv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7176'
Aug 14 10:52:39.399: INFO: stderr: ""
Aug 14 10:52:39.399: INFO: stdout: ""
Aug 14 10:52:39.399: INFO: update-demo-nautilus-ckrxv is created but not running
Aug 14 10:52:44.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7176'
Aug 14 10:52:44.598: INFO: stderr: ""
Aug 14 10:52:44.598: INFO: stdout: "update-demo-nautilus-ckrxv update-demo-nautilus-s2mdt "
Aug 14 10:52:44.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckrxv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7176'
Aug 14 10:52:44.688: INFO: stderr: ""
Aug 14 10:52:44.688: INFO: stdout: ""
Aug 14 10:52:44.688: INFO: update-demo-nautilus-ckrxv is created but not running
Aug 14 10:52:49.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7176'
Aug 14 10:52:49.790: INFO: stderr: ""
Aug 14 10:52:49.790: INFO: stdout: "update-demo-nautilus-ckrxv update-demo-nautilus-s2mdt "
Aug 14 10:52:49.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckrxv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7176'
Aug 14 10:52:49.880: INFO: stderr: ""
Aug 14 10:52:49.880: INFO: stdout: "true"
Aug 14 10:52:49.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckrxv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7176'
Aug 14 10:52:49.970: INFO: stderr: ""
Aug 14 10:52:49.970: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 10:52:49.970: INFO: validating pod update-demo-nautilus-ckrxv
Aug 14 10:52:49.975: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 10:52:49.975: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 10:52:49.975: INFO: update-demo-nautilus-ckrxv is verified up and running
Aug 14 10:52:49.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2mdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7176'
Aug 14 10:52:50.067: INFO: stderr: ""
Aug 14 10:52:50.067: INFO: stdout: "true"
Aug 14 10:52:50.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2mdt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7176'
Aug 14 10:52:50.160: INFO: stderr: ""
Aug 14 10:52:50.160: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 10:52:50.160: INFO: validating pod update-demo-nautilus-s2mdt
Aug 14 10:52:50.247: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 10:52:50.247: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 10:52:50.247: INFO: update-demo-nautilus-s2mdt is verified up and running
STEP: using delete to clean up resources
Aug 14 10:52:50.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7176'
Aug 14 10:52:50.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 14 10:52:50.351: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 14 10:52:50.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7176'
Aug 14 10:52:50.440: INFO: stderr: "No resources found.\n"
Aug 14 10:52:50.440: INFO: stdout: ""
Aug 14 10:52:50.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7176 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 14 10:52:50.564: INFO: stderr: ""
Aug 14 10:52:50.564: INFO: stdout: "update-demo-nautilus-ckrxv\nupdate-demo-nautilus-s2mdt\n"
Aug 14 10:52:51.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7176'
Aug 14 10:52:51.161: INFO: stderr: "No resources found.\n"
Aug 14 10:52:51.161: INFO: stdout: ""
Aug 14 10:52:51.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7176 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 14 10:52:51.252: INFO: stderr: ""
Aug 14 10:52:51.252: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:52:51.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7176" for this suite.
Aug 14 10:52:59.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:52:59.475: INFO: namespace kubectl-7176 deletion completed in 8.218848783s

• [SLOW TEST:24.949 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
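The polling above repeatedly renders a go-template over each pod's `containerStatuses`, printing "true" only once the `update-demo` container reports a running state (an empty string means "created but not running"). A sketch of that check in plain Python, with minimal stand-in pod objects:

```python
def running_check(pod):
    """Mirror the logged go-template: emit "true" only when the update-demo
    container has a state.running entry, else an empty string."""
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == "update-demo" and "running" in cs.get("state", {}):
            out += "true"
    return out

pending = {"status": {}}  # containerStatuses not populated yet
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}
```

The `exists` guards in the template serve the same purpose as the `.get(..., default)` calls here: a pod that has not yet been scheduled has no `containerStatuses` at all.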
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:52:59.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 14 10:53:05.130: INFO: Successfully updated pod "annotationupdate11802dd3-6a09-4bda-a1d9-c0fd044373a1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:53:09.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3429" for this suite.
Aug 14 10:53:33.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:53:33.494: INFO: namespace projected-3429 deletion completed in 24.322984605s

• [SLOW TEST:34.018 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:53:33.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 14 10:53:34.763: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7265,SelfLink:/api/v1/namespaces/watch-7265/configmaps/e2e-watch-test-resource-version,UID:ccfe97aa-af68-4d79-8f66-0d14b4ca14ec,ResourceVersion:4872479,Generation:0,CreationTimestamp:2020-08-14 10:53:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 14 10:53:34.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7265,SelfLink:/api/v1/namespaces/watch-7265/configmaps/e2e-watch-test-resource-version,UID:ccfe97aa-af68-4d79-8f66-0d14b4ca14ec,ResourceVersion:4872482,Generation:0,CreationTimestamp:2020-08-14 10:53:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:53:34.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7265" for this suite.
Aug 14 10:53:41.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:53:41.557: INFO: namespace watch-7265 deletion completed in 6.717702048s

• [SLOW TEST:8.064 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
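Starting a watch from the first update's resourceVersion yields only the events after that point, which is why the log shows exactly one MODIFIED (mutation: 2) followed by the DELETED event. A sketch of that replay semantics, with event resourceVersions loosely modeled on the log (the earlier RVs are hypothetical):

```python
def watch_from(events, resource_version):
    """Return events strictly newer than `resource_version`, like opening a
    watch with resourceVersion set to the first update's RV."""
    return [(etype, rv) for etype, rv in events if rv > resource_version]

# Create, first update, second update, delete; RVs after the log's 4872479/82.
events = [("ADDED", 4872470), ("MODIFIED", 4872475),
          ("MODIFIED", 4872479), ("DELETED", 4872482)]
replayed = watch_from(events, 4872475)
```

Note that real watches compare resourceVersions opaquely via etcd ordering; numeric comparison is only a convenience in this sketch.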
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:53:41.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 14 10:53:41.594: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 14 10:53:41.603: INFO: Waiting for terminating namespaces to be deleted...
Aug 14 10:53:41.605: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 14 10:53:41.612: INFO: cassandra-76f5c4d86c-h2nwg from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.612: INFO: 	Container cassandra ready: true, restart count 0
Aug 14 10:53:41.612: INFO: homer-74dd4556d9-ws825 from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.612: INFO: 	Container homer ready: true, restart count 0
Aug 14 10:53:41.612: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.612: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 14 10:53:41.612: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.612: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 14 10:53:41.612: INFO: sprout-686cc64cfb-smjks from ims-p7dpm started at 2020-08-13 08:25:21 +0000 UTC (2 container statuses recorded)
Aug 14 10:53:41.612: INFO: 	Container sprout ready: false, restart count 0
Aug 14 10:53:41.612: INFO: 	Container tailer ready: false, restart count 0
Aug 14 10:53:41.612: INFO: homestead-prov-756c8bff5d-d6lsl from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.612: INFO: 	Container homestead-prov ready: false, restart count 0
Aug 14 10:53:41.612: INFO: etcd-5cbf55c8c-k46jp from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.612: INFO: 	Container etcd ready: true, restart count 0
Aug 14 10:53:41.612: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 14 10:53:41.620: INFO: ralf-57c4654cb8-sctv6 from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (2 container statuses recorded)
Aug 14 10:53:41.620: INFO: 	Container ralf ready: true, restart count 0
Aug 14 10:53:41.620: INFO: 	Container tailer ready: true, restart count 0
Aug 14 10:53:41.620: INFO: homestead-57586d6cdc-g8qmw from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (2 container statuses recorded)
Aug 14 10:53:41.620: INFO: 	Container homestead ready: false, restart count 383
Aug 14 10:53:41.620: INFO: 	Container tailer ready: true, restart count 0
Aug 14 10:53:41.620: INFO: chronos-687b9884c5-g8mpr from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 10:53:41.620: INFO: 	Container chronos ready: true, restart count 0
Aug 14 10:53:41.620: INFO: 	Container tailer ready: true, restart count 0
Aug 14 10:53:41.620: INFO: astaire-5ddcdd6b7f-hppqv from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 10:53:41.620: INFO: 	Container astaire ready: true, restart count 0
Aug 14 10:53:41.620: INFO: 	Container tailer ready: true, restart count 0
Aug 14 10:53:41.620: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.621: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 14 10:53:41.621: INFO: ellis-57b84b6dd7-xv7nx from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.621: INFO: 	Container ellis ready: false, restart count 0
Aug 14 10:53:41.621: INFO: bono-5cdb7bfcdd-rq8q2 from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 10:53:41.621: INFO: 	Container bono ready: false, restart count 0
Aug 14 10:53:41.621: INFO: 	Container tailer ready: false, restart count 0
Aug 14 10:53:41.621: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container statuses recorded)
Aug 14 10:53:41.621: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162b1d0a02960c60], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:53:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7706" for this suite.
Aug 14 10:53:48.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:53:48.770: INFO: namespace sched-pred-7706 deletion completed in 6.099007506s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.212 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:53:48.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 14 10:53:48.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-224'
Aug 14 10:53:48.948: INFO: stderr: ""
Aug 14 10:53:48.948: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 14 10:53:48.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-224'
Aug 14 10:53:55.220: INFO: stderr: ""
Aug 14 10:53:55.220: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:53:55.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-224" for this suite.
Aug 14 10:54:01.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:54:01.556: INFO: namespace kubectl-224 deletion completed in 6.296367443s

• [SLOW TEST:12.786 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:54:01.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 10:54:01.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7" in namespace "projected-7374" to be "success or failure"
Aug 14 10:54:01.846: INFO: Pod "downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7": Phase="Pending", Reason="", readiness=false. Elapsed: 117.17423ms
Aug 14 10:54:03.893: INFO: Pod "downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164580084s
Aug 14 10:54:06.253: INFO: Pod "downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.523927394s
Aug 14 10:54:08.257: INFO: Pod "downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7": Phase="Running", Reason="", readiness=true. Elapsed: 6.528290615s
Aug 14 10:54:10.262: INFO: Pod "downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.532807399s
STEP: Saw pod success
Aug 14 10:54:10.262: INFO: Pod "downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7" satisfied condition "success or failure"
Aug 14 10:54:10.265: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7 container client-container: 
STEP: delete the pod
Aug 14 10:54:10.726: INFO: Waiting for pod downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7 to disappear
Aug 14 10:54:10.738: INFO: Pod downwardapi-volume-cccabf1a-bef0-45a1-a325-62437df132c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:54:10.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7374" for this suite.
Aug 14 10:54:16.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:54:16.856: INFO: namespace projected-7374 deletion completed in 6.115355577s

• [SLOW TEST:15.300 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:54:16.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:54:17.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2468" for this suite.
Aug 14 10:54:23.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:54:23.669: INFO: namespace kubelet-test-2468 deletion completed in 6.132788974s

• [SLOW TEST:6.812 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:54:23.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 14 10:54:30.559: INFO: Successfully updated pod "annotationupdate43b10cb4-d2d0-47b1-983c-a10ba7ab246b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:54:32.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6483" for this suite.
Aug 14 10:54:55.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:54:55.313: INFO: namespace downward-api-6483 deletion completed in 22.491263186s

• [SLOW TEST:31.643 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:54:55.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug 14 10:54:56.227: INFO: Waiting up to 5m0s for pod "client-containers-ef130123-8286-4215-a21b-ba286ee007f1" in namespace "containers-3449" to be "success or failure"
Aug 14 10:54:56.290: INFO: Pod "client-containers-ef130123-8286-4215-a21b-ba286ee007f1": Phase="Pending", Reason="", readiness=false. Elapsed: 62.909995ms
Aug 14 10:54:58.294: INFO: Pod "client-containers-ef130123-8286-4215-a21b-ba286ee007f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067204294s
Aug 14 10:55:00.355: INFO: Pod "client-containers-ef130123-8286-4215-a21b-ba286ee007f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127466109s
Aug 14 10:55:02.685: INFO: Pod "client-containers-ef130123-8286-4215-a21b-ba286ee007f1": Phase="Running", Reason="", readiness=true. Elapsed: 6.457959109s
Aug 14 10:55:04.690: INFO: Pod "client-containers-ef130123-8286-4215-a21b-ba286ee007f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.463113326s
STEP: Saw pod success
Aug 14 10:55:04.690: INFO: Pod "client-containers-ef130123-8286-4215-a21b-ba286ee007f1" satisfied condition "success or failure"
Aug 14 10:55:04.694: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ef130123-8286-4215-a21b-ba286ee007f1 container test-container: 
STEP: delete the pod
Aug 14 10:55:04.847: INFO: Waiting for pod client-containers-ef130123-8286-4215-a21b-ba286ee007f1 to disappear
Aug 14 10:55:04.896: INFO: Pod client-containers-ef130123-8286-4215-a21b-ba286ee007f1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:55:04.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3449" for this suite.
Aug 14 10:55:10.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:55:11.021: INFO: namespace containers-3449 deletion completed in 6.120459125s

• [SLOW TEST:15.707 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:55:11.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug 14 10:55:11.644: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:55:11.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6812" for this suite.
Aug 14 10:55:20.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:55:20.086: INFO: namespace kubectl-6812 deletion completed in 8.166896056s

• [SLOW TEST:9.065 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:55:20.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:55:38.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6629" for this suite.
Aug 14 10:55:44.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:55:44.994: INFO: namespace watch-6629 deletion completed in 6.289461872s

• [SLOW TEST:24.908 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:55:44.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6555/configmap-test-4cbe67cd-0608-443e-96b5-d017223365cb
STEP: Creating a pod to test consume configMaps
Aug 14 10:55:45.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad" in namespace "configmap-6555" to be "success or failure"
Aug 14 10:55:45.146: INFO: Pod "pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad": Phase="Pending", Reason="", readiness=false. Elapsed: 21.211361ms
Aug 14 10:55:47.447: INFO: Pod "pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322106782s
Aug 14 10:55:49.451: INFO: Pod "pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326177436s
Aug 14 10:55:51.455: INFO: Pod "pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.330249777s
STEP: Saw pod success
Aug 14 10:55:51.455: INFO: Pod "pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad" satisfied condition "success or failure"
Aug 14 10:55:51.458: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad container env-test: 
STEP: delete the pod
Aug 14 10:55:52.375: INFO: Waiting for pod pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad to disappear
Aug 14 10:55:52.727: INFO: Pod pod-configmaps-dc326f7a-113b-46b6-ad25-a6780bf25fad no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:55:52.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6555" for this suite.
Aug 14 10:56:01.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:56:02.094: INFO: namespace configmap-6555 deletion completed in 9.363496416s

• [SLOW TEST:17.099 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:56:02.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 in namespace container-probe-9101
Aug 14 10:56:08.521: INFO: Started pod liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 in namespace container-probe-9101
STEP: checking the pod's current state and verifying that restartCount is present
Aug 14 10:56:08.524: INFO: Initial restart count of pod liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 is 0
Aug 14 10:56:24.721: INFO: Restart count of pod container-probe-9101/liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 is now 1 (16.197347089s elapsed)
Aug 14 10:56:46.213: INFO: Restart count of pod container-probe-9101/liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 is now 2 (37.688983709s elapsed)
Aug 14 10:57:04.667: INFO: Restart count of pod container-probe-9101/liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 is now 3 (56.142421578s elapsed)
Aug 14 10:57:22.752: INFO: Restart count of pod container-probe-9101/liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 is now 4 (1m14.227941621s elapsed)
Aug 14 10:58:33.816: INFO: Restart count of pod container-probe-9101/liveness-3394459e-3da5-41ac-a1d2-c27b22cb8b40 is now 5 (2m25.291850422s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:58:34.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9101" for this suite.
Aug 14 10:58:42.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:58:43.046: INFO: namespace container-probe-9101 deletion completed in 8.345388586s

• [SLOW TEST:160.952 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:58:43.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 10:58:43.692: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 14 10:58:43.703: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:43.915: INFO: Number of nodes with available pods: 0
Aug 14 10:58:43.915: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 10:58:45.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:46.329: INFO: Number of nodes with available pods: 0
Aug 14 10:58:46.329: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 10:58:46.925: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:46.928: INFO: Number of nodes with available pods: 0
Aug 14 10:58:46.928: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 10:58:48.036: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:48.287: INFO: Number of nodes with available pods: 0
Aug 14 10:58:48.287: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 10:58:49.192: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:49.715: INFO: Number of nodes with available pods: 0
Aug 14 10:58:49.715: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 10:58:49.920: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:49.923: INFO: Number of nodes with available pods: 0
Aug 14 10:58:49.923: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 10:58:50.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:50.955: INFO: Number of nodes with available pods: 0
Aug 14 10:58:50.955: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 10:58:51.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:51.956: INFO: Number of nodes with available pods: 1
Aug 14 10:58:51.956: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 10:58:52.920: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:52.923: INFO: Number of nodes with available pods: 2
Aug 14 10:58:52.923: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 14 10:58:53.079: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:53.079: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:53.134: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:54.161: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:54.161: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:54.166: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:55.139: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:55.139: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:55.143: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:56.155: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:56.155: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:58:56.155: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:56.159: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:57.138: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:57.138: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:58:57.138: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:57.141: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:58.377: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:58.377: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:58:58.377: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:58.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:58:59.138: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:59.138: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:58:59.138: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:58:59.142: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:00.140: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:00.140: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:59:00.140: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:00.144: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:01.139: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:01.139: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:59:01.139: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:01.144: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:02.186: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:02.186: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:59:02.186: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:02.190: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:03.138: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:03.138: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:59:03.138: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:03.141: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:04.137: INFO: Wrong image for pod: daemon-set-7z2cq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:04.137: INFO: Pod daemon-set-7z2cq is not available
Aug 14 10:59:04.137: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:04.140: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:05.168: INFO: Pod daemon-set-c5h4z is not available
Aug 14 10:59:05.168: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:05.192: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:06.137: INFO: Pod daemon-set-c5h4z is not available
Aug 14 10:59:06.137: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:06.139: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:07.364: INFO: Pod daemon-set-c5h4z is not available
Aug 14 10:59:07.364: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:07.624: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:08.139: INFO: Pod daemon-set-c5h4z is not available
Aug 14 10:59:08.139: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:08.143: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:09.139: INFO: Pod daemon-set-c5h4z is not available
Aug 14 10:59:09.139: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:09.142: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:10.138: INFO: Pod daemon-set-c5h4z is not available
Aug 14 10:59:10.138: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:10.141: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:11.138: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:11.141: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:12.139: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:12.143: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:13.139: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:13.139: INFO: Pod daemon-set-trs5d is not available
Aug 14 10:59:13.143: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:14.138: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:14.138: INFO: Pod daemon-set-trs5d is not available
Aug 14 10:59:14.142: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:15.143: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:15.143: INFO: Pod daemon-set-trs5d is not available
Aug 14 10:59:15.147: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:16.138: INFO: Wrong image for pod: daemon-set-trs5d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 14 10:59:16.138: INFO: Pod daemon-set-trs5d is not available
Aug 14 10:59:16.142: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:17.168: INFO: Pod daemon-set-9tztb is not available
Aug 14 10:59:17.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 14 10:59:17.231: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:17.234: INFO: Number of nodes with available pods: 1
Aug 14 10:59:17.234: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 10:59:18.239: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:18.242: INFO: Number of nodes with available pods: 1
Aug 14 10:59:18.242: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 10:59:19.456: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:19.461: INFO: Number of nodes with available pods: 1
Aug 14 10:59:19.461: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 10:59:20.239: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:20.243: INFO: Number of nodes with available pods: 1
Aug 14 10:59:20.243: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 10:59:21.239: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:21.242: INFO: Number of nodes with available pods: 1
Aug 14 10:59:21.242: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 10:59:22.324: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 10:59:22.328: INFO: Number of nodes with available pods: 2
Aug 14 10:59:22.328: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1065, will wait for the garbage collector to delete the pods
Aug 14 10:59:22.400: INFO: Deleting DaemonSet.extensions daemon-set took: 6.394804ms
Aug 14 10:59:22.701: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.385764ms
Aug 14 10:59:36.341: INFO: Number of nodes with available pods: 0
Aug 14 10:59:36.341: INFO: Number of running nodes: 0, number of available pods: 0
Aug 14 10:59:36.344: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1065/daemonsets","resourceVersion":"4873583"},"items":null}

Aug 14 10:59:36.346: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1065/pods","resourceVersion":"4873583"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:59:36.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1065" for this suite.
Aug 14 10:59:42.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:59:42.671: INFO: namespace daemonsets-1065 deletion completed in 6.314527132s

• [SLOW TEST:59.624 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:59:42.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a1a55bdb-7921-4480-9fd8-ff2b0ac35074
STEP: Creating a pod to test consume secrets
Aug 14 10:59:42.780: INFO: Waiting up to 5m0s for pod "pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27" in namespace "secrets-3194" to be "success or failure"
Aug 14 10:59:42.808: INFO: Pod "pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27": Phase="Pending", Reason="", readiness=false. Elapsed: 28.244299ms
Aug 14 10:59:44.813: INFO: Pod "pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03306835s
Aug 14 10:59:46.898: INFO: Pod "pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27": Phase="Running", Reason="", readiness=true. Elapsed: 4.118188853s
Aug 14 10:59:48.902: INFO: Pod "pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122225684s
STEP: Saw pod success
Aug 14 10:59:48.902: INFO: Pod "pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27" satisfied condition "success or failure"
Aug 14 10:59:48.905: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27 container secret-volume-test: 
STEP: delete the pod
Aug 14 10:59:48.924: INFO: Waiting for pod pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27 to disappear
Aug 14 10:59:48.928: INFO: Pod pod-secrets-a5871369-4376-4b6f-94aa-b3c45881cc27 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 10:59:48.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3194" for this suite.
Aug 14 10:59:56.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 10:59:57.192: INFO: namespace secrets-3194 deletion completed in 8.261236506s

• [SLOW TEST:14.520 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 10:59:57.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:00:25.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9916" for this suite.
Aug 14 11:00:31.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:00:31.566: INFO: namespace namespaces-9916 deletion completed in 6.117389325s
STEP: Destroying namespace "nsdeletetest-2608" for this suite.
Aug 14 11:00:31.568: INFO: Namespace nsdeletetest-2608 was already deleted
STEP: Destroying namespace "nsdeletetest-187" for this suite.
Aug 14 11:00:39.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:00:39.664: INFO: namespace nsdeletetest-187 deletion completed in 8.09576294s

• [SLOW TEST:42.472 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:00:39.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug 14 11:00:39.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 14 11:00:39.885: INFO: stderr: ""
Aug 14 11:00:39.885: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:38261\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:38261/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:00:39.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7714" for this suite.
Aug 14 11:00:45.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:00:46.013: INFO: namespace kubectl-7714 deletion completed in 6.124063798s

• [SLOW TEST:6.348 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:00:46.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 11:00:55.337: INFO: Waiting up to 5m0s for pod "client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b" in namespace "pods-5530" to be "success or failure"
Aug 14 11:00:55.481: INFO: Pod "client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 143.703543ms
Aug 14 11:00:58.110: INFO: Pod "client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772862387s
Aug 14 11:01:00.113: INFO: Pod "client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.776234479s
Aug 14 11:01:02.265: INFO: Pod "client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b": Phase="Running", Reason="", readiness=true. Elapsed: 6.927503231s
Aug 14 11:01:04.268: INFO: Pod "client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.930730843s
STEP: Saw pod success
Aug 14 11:01:04.268: INFO: Pod "client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b" satisfied condition "success or failure"
Aug 14 11:01:04.270: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b container env3cont: 
STEP: delete the pod
Aug 14 11:01:04.320: INFO: Waiting for pod client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b to disappear
Aug 14 11:01:04.588: INFO: Pod client-envvars-60d28914-3608-488b-b542-bd63a1aa8f3b no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:01:04.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5530" for this suite.
Aug 14 11:01:44.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:01:44.955: INFO: namespace pods-5530 deletion completed in 40.363832737s

• [SLOW TEST:58.942 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:01:44.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 11:01:45.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 14 11:01:45.776: INFO: stderr: ""
Aug 14 11:01:45.776: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-08-14T09:55:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:01:45.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5295" for this suite.
Aug 14 11:01:51.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:01:51.905: INFO: namespace kubectl-5295 deletion completed in 6.124543855s

• [SLOW TEST:6.950 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:01:51.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:01:52.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103" in namespace "projected-9784" to be "success or failure"
Aug 14 11:01:52.114: INFO: Pod "downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103": Phase="Pending", Reason="", readiness=false. Elapsed: 48.813267ms
Aug 14 11:01:54.218: INFO: Pod "downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152802285s
Aug 14 11:01:56.254: INFO: Pod "downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189315672s
Aug 14 11:01:58.410: INFO: Pod "downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345450666s
STEP: Saw pod success
Aug 14 11:01:58.410: INFO: Pod "downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103" satisfied condition "success or failure"
Aug 14 11:01:58.413: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103 container client-container: 
STEP: delete the pod
Aug 14 11:01:58.662: INFO: Waiting for pod downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103 to disappear
Aug 14 11:01:58.721: INFO: Pod downwardapi-volume-4d7a9fea-f293-4bcf-ada9-ff7224996103 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:01:58.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9784" for this suite.
Aug 14 11:02:07.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:02:08.044: INFO: namespace projected-9784 deletion completed in 9.317894119s

• [SLOW TEST:16.138 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:02:08.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 14 11:02:22.733: INFO: Successfully updated pod "labelsupdate43476fa2-ea01-43ba-8d12-052450fce08e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:02:25.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-790" for this suite.
Aug 14 11:02:52.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:02:52.712: INFO: namespace downward-api-790 deletion completed in 26.716177045s

• [SLOW TEST:44.668 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:02:52.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5999.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5999.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5999.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5999.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5999.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 179.244.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.244.179_udp@PTR;check="$$(dig +tcp +noall +answer +search 179.244.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.244.179_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5999.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5999.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5999.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5999.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5999.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5999.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 179.244.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.244.179_udp@PTR;check="$$(dig +tcp +noall +answer +search 179.244.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.244.179_tcp@PTR;sleep 1; done

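(Editor's note: the wheezy and jessie command blocks above repeat one pattern per DNS record: run `dig`, and write an `OK` marker file only when the answer section is non-empty. A minimal sketch of that check with the lookup command made pluggable; the `probe` helper and the stub lookups below are hypothetical, not part of the e2e framework:)

```shell
# Each probe writes an OK marker file only when the lookup produced a
# non-empty answer -- the same check="..." && test -n "$check" && echo OK
# chain used in the loops above (which hard-code dig as the lookup).
probe() {  # usage: probe <result-file> <lookup-command...>
  local out="$1"; shift
  local check
  check="$("$@")" && test -n "$check" && echo OK > "$out"
}

results=$(mktemp -d)
# Stand-ins for dig: one lookup that answers, one that returns nothing.
probe "$results/good" echo "10.106.244.179"
probe "$results/empty" true || true   # empty output -> no marker written
```

(The `sleep 1` loop in the real commands simply retries this probe up to 600 times per record.)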
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 11:03:07.352: INFO: Unable to read wheezy_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.355: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.358: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.360: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.798: INFO: Unable to read jessie_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.801: INFO: Unable to read jessie_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.803: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.805: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:07.818: INFO: Lookups using dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036 failed for: [wheezy_udp@dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_udp@dns-test-service.dns-5999.svc.cluster.local jessie_tcp@dns-test-service.dns-5999.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local]

Aug 14 11:03:12.848: INFO: Unable to read wheezy_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:12.851: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:12.855: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:12.858: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:13.297: INFO: Unable to read jessie_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:13.300: INFO: Unable to read jessie_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:13.302: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:13.304: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:13.319: INFO: Lookups using dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036 failed for: [wheezy_udp@dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_udp@dns-test-service.dns-5999.svc.cluster.local jessie_tcp@dns-test-service.dns-5999.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local]

Aug 14 11:03:17.938: INFO: Unable to read wheezy_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:17.941: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:17.962: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:17.965: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:17.983: INFO: Unable to read jessie_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:17.986: INFO: Unable to read jessie_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:17.988: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:17.993: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:18.006: INFO: Lookups using dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036 failed for: [wheezy_udp@dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_udp@dns-test-service.dns-5999.svc.cluster.local jessie_tcp@dns-test-service.dns-5999.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local]

Aug 14 11:03:22.970: INFO: Unable to read wheezy_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.207: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.210: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.213: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.236: INFO: Unable to read jessie_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.238: INFO: Unable to read jessie_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.240: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.242: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:23.254: INFO: Lookups using dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036 failed for: [wheezy_udp@dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_udp@dns-test-service.dns-5999.svc.cluster.local jessie_tcp@dns-test-service.dns-5999.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local]

Aug 14 11:03:27.822: INFO: Unable to read wheezy_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.826: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.829: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.832: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.886: INFO: Unable to read jessie_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.888: INFO: Unable to read jessie_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.890: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.892: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:27.906: INFO: Lookups using dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036 failed for: [wheezy_udp@dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_udp@dns-test-service.dns-5999.svc.cluster.local jessie_tcp@dns-test-service.dns-5999.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local]

Aug 14 11:03:32.950: INFO: Unable to read wheezy_udp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:33.410: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:33.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:33.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:33.693: INFO: Unable to read jessie_tcp@dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:33.696: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:33.698: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local from pod dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036: the server could not find the requested resource (get pods dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036)
Aug 14 11:03:33.712: INFO: Lookups using dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036 failed for: [wheezy_udp@dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service.dns-5999.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_tcp@dns-test-service.dns-5999.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5999.svc.cluster.local]

Aug 14 11:03:37.985: INFO: DNS probes using dns-5999/dns-test-dd910760-34e4-4b50-ad42-a61e0a4f3036 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:03:39.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5999" for this suite.
Aug 14 11:03:47.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:03:47.481: INFO: namespace dns-5999 deletion completed in 8.089251558s

• [SLOW TEST:54.769 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:03:47.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 14 11:03:48.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8344'
Aug 14 11:04:11.214: INFO: stderr: ""
Aug 14 11:04:11.214: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 14 11:04:12.219: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:04:12.219: INFO: Found 0 / 1
Aug 14 11:04:13.230: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:04:13.230: INFO: Found 0 / 1
Aug 14 11:04:14.244: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:04:14.244: INFO: Found 0 / 1
Aug 14 11:04:15.218: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:04:15.218: INFO: Found 0 / 1
Aug 14 11:04:16.221: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:04:16.221: INFO: Found 1 / 1
Aug 14 11:04:16.221: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 14 11:04:16.748: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:04:16.748: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 14 11:04:16.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-m8wlv --namespace=kubectl-8344 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 14 11:04:16.847: INFO: stderr: ""
Aug 14 11:04:16.847: INFO: stdout: "pod/redis-master-m8wlv patched\n"
STEP: checking annotations
Aug 14 11:04:16.861: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:04:16.861: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:04:16.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8344" for this suite.
Aug 14 11:04:40.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:04:40.973: INFO: namespace kubectl-8344 deletion completed in 24.109292712s

• [SLOW TEST:53.492 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:04:40.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 14 11:04:41.035: INFO: Waiting up to 5m0s for pod "pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a" in namespace "emptydir-3201" to be "success or failure"
Aug 14 11:04:41.045: INFO: Pod "pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.002576ms
Aug 14 11:04:43.050: INFO: Pod "pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01445522s
Aug 14 11:04:45.053: INFO: Pod "pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018176624s
Aug 14 11:04:47.082: INFO: Pod "pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a": Phase="Running", Reason="", readiness=true. Elapsed: 6.04669745s
Aug 14 11:04:49.085: INFO: Pod "pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04973431s
STEP: Saw pod success
Aug 14 11:04:49.085: INFO: Pod "pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a" satisfied condition "success or failure"
Aug 14 11:04:49.087: INFO: Trying to get logs from node iruya-worker pod pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a container test-container: 
STEP: delete the pod
Aug 14 11:04:49.108: INFO: Waiting for pod pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a to disappear
Aug 14 11:04:49.117: INFO: Pod pod-a2b1fe7c-eefa-4a6a-8325-467d8c3a164a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:04:49.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3201" for this suite.
Aug 14 11:04:55.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:04:55.206: INFO: namespace emptydir-3201 deletion completed in 6.085534424s

• [SLOW TEST:14.233 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:04:55.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3508
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 14 11:04:55.654: INFO: Found 0 stateful pods, waiting for 3
Aug 14 11:05:05.658: INFO: Found 2 stateful pods, waiting for 3
Aug 14 11:05:15.659: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:05:15.659: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:05:15.659: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 14 11:05:25.658: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:05:25.658: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:05:25.658: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:05:25.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3508 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 11:05:26.060: INFO: stderr: "I0814 11:05:25.797834    1820 log.go:172] (0xc000a0c6e0) (0xc0005dea00) Create stream\nI0814 11:05:25.797898    1820 log.go:172] (0xc000a0c6e0) (0xc0005dea00) Stream added, broadcasting: 1\nI0814 11:05:25.800675    1820 log.go:172] (0xc000a0c6e0) Reply frame received for 1\nI0814 11:05:25.800960    1820 log.go:172] (0xc000a0c6e0) (0xc000a30000) Create stream\nI0814 11:05:25.801093    1820 log.go:172] (0xc000a0c6e0) (0xc000a30000) Stream added, broadcasting: 3\nI0814 11:05:25.802613    1820 log.go:172] (0xc000a0c6e0) Reply frame received for 3\nI0814 11:05:25.802641    1820 log.go:172] (0xc000a0c6e0) (0xc0005de140) Create stream\nI0814 11:05:25.802654    1820 log.go:172] (0xc000a0c6e0) (0xc0005de140) Stream added, broadcasting: 5\nI0814 11:05:25.803573    1820 log.go:172] (0xc000a0c6e0) Reply frame received for 5\nI0814 11:05:25.865993    1820 log.go:172] (0xc000a0c6e0) Data frame received for 5\nI0814 11:05:25.866016    1820 log.go:172] (0xc0005de140) (5) Data frame handling\nI0814 11:05:25.866028    1820 log.go:172] (0xc0005de140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 11:05:26.049880    1820 log.go:172] (0xc000a0c6e0) Data frame received for 5\nI0814 11:05:26.049922    1820 log.go:172] (0xc0005de140) (5) Data frame handling\nI0814 11:05:26.049946    1820 log.go:172] (0xc000a0c6e0) Data frame received for 3\nI0814 11:05:26.049955    1820 log.go:172] (0xc000a30000) (3) Data frame handling\nI0814 11:05:26.049965    1820 log.go:172] (0xc000a30000) (3) Data frame sent\nI0814 11:05:26.049974    1820 log.go:172] (0xc000a0c6e0) Data frame received for 3\nI0814 11:05:26.049983    1820 log.go:172] (0xc000a30000) (3) Data frame handling\nI0814 11:05:26.052214    1820 log.go:172] (0xc000a0c6e0) Data frame received for 1\nI0814 11:05:26.052247    1820 log.go:172] (0xc0005dea00) (1) Data frame handling\nI0814 11:05:26.052261    1820 log.go:172] (0xc0005dea00) (1) Data frame sent\nI0814 11:05:26.052276    1820 log.go:172] (0xc000a0c6e0) (0xc0005dea00) Stream removed, broadcasting: 1\nI0814 11:05:26.052299    1820 log.go:172] (0xc000a0c6e0) Go away received\nI0814 11:05:26.052939    1820 log.go:172] (0xc000a0c6e0) (0xc0005dea00) Stream removed, broadcasting: 1\nI0814 11:05:26.052974    1820 log.go:172] (0xc000a0c6e0) (0xc000a30000) Stream removed, broadcasting: 3\nI0814 11:05:26.052987    1820 log.go:172] (0xc000a0c6e0) (0xc0005de140) Stream removed, broadcasting: 5\n"
Aug 14 11:05:26.061: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 11:05:26.061: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 14 11:05:36.477: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 14 11:05:46.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3508 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:05:47.354: INFO: stderr: "I0814 11:05:47.276808    1840 log.go:172] (0xc00093a370) (0xc00057e6e0) Create stream\nI0814 11:05:47.276867    1840 log.go:172] (0xc00093a370) (0xc00057e6e0) Stream added, broadcasting: 1\nI0814 11:05:47.281787    1840 log.go:172] (0xc00093a370) Reply frame received for 1\nI0814 11:05:47.281839    1840 log.go:172] (0xc00093a370) (0xc00057e000) Create stream\nI0814 11:05:47.281850    1840 log.go:172] (0xc00093a370) (0xc00057e000) Stream added, broadcasting: 3\nI0814 11:05:47.282938    1840 log.go:172] (0xc00093a370) Reply frame received for 3\nI0814 11:05:47.282973    1840 log.go:172] (0xc00093a370) (0xc00057e0a0) Create stream\nI0814 11:05:47.282983    1840 log.go:172] (0xc00093a370) (0xc00057e0a0) Stream added, broadcasting: 5\nI0814 11:05:47.284366    1840 log.go:172] (0xc00093a370) Reply frame received for 5\nI0814 11:05:47.348088    1840 log.go:172] (0xc00093a370) Data frame received for 5\nI0814 11:05:47.348136    1840 log.go:172] (0xc00057e0a0) (5) Data frame handling\nI0814 11:05:47.348154    1840 log.go:172] (0xc00057e0a0) (5) Data frame sent\nI0814 11:05:47.348168    1840 log.go:172] (0xc00093a370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0814 11:05:47.348179    1840 log.go:172] (0xc00057e0a0) (5) Data frame handling\nI0814 11:05:47.348211    1840 log.go:172] (0xc00093a370) Data frame received for 3\nI0814 11:05:47.348237    1840 log.go:172] (0xc00057e000) (3) Data frame handling\nI0814 11:05:47.348251    1840 log.go:172] (0xc00057e000) (3) Data frame sent\nI0814 11:05:47.348260    1840 log.go:172] (0xc00093a370) Data frame received for 3\nI0814 11:05:47.348267    1840 log.go:172] (0xc00057e000) (3) Data frame handling\nI0814 11:05:47.349262    1840 log.go:172] (0xc00093a370) Data frame received for 1\nI0814 11:05:47.349284    1840 log.go:172] (0xc00057e6e0) (1) Data frame handling\nI0814 11:05:47.349293    1840 log.go:172] (0xc00057e6e0) (1) Data frame sent\nI0814 11:05:47.349391    1840 log.go:172] (0xc00093a370) (0xc00057e6e0) Stream removed, broadcasting: 1\nI0814 11:05:47.349467    1840 log.go:172] (0xc00093a370) Go away received\nI0814 11:05:47.349664    1840 log.go:172] (0xc00093a370) (0xc00057e6e0) Stream removed, broadcasting: 1\nI0814 11:05:47.349678    1840 log.go:172] (0xc00093a370) (0xc00057e000) Stream removed, broadcasting: 3\nI0814 11:05:47.349686    1840 log.go:172] (0xc00093a370) (0xc00057e0a0) Stream removed, broadcasting: 5\n"
Aug 14 11:05:47.354: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 11:05:47.354: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 11:05:57.608: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:05:57.608: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:05:57.608: INFO: Waiting for Pod statefulset-3508/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:06:09.141: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:06:09.141: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:06:09.141: INFO: Waiting for Pod statefulset-3508/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:06:18.026: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:06:18.026: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:06:27.635: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:06:27.635: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:06:38.428: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 14 11:06:47.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3508 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 11:06:48.469: INFO: stderr: "I0814 11:06:48.066028    1860 log.go:172] (0xc00057a580) (0xc0003da960) Create stream\nI0814 11:06:48.066071    1860 log.go:172] (0xc00057a580) (0xc0003da960) Stream added, broadcasting: 1\nI0814 11:06:48.067709    1860 log.go:172] (0xc00057a580) Reply frame received for 1\nI0814 11:06:48.067740    1860 log.go:172] (0xc00057a580) (0xc0001e8140) Create stream\nI0814 11:06:48.067749    1860 log.go:172] (0xc00057a580) (0xc0001e8140) Stream added, broadcasting: 3\nI0814 11:06:48.068555    1860 log.go:172] (0xc00057a580) Reply frame received for 3\nI0814 11:06:48.068607    1860 log.go:172] (0xc00057a580) (0xc000832000) Create stream\nI0814 11:06:48.068626    1860 log.go:172] (0xc00057a580) (0xc000832000) Stream added, broadcasting: 5\nI0814 11:06:48.069545    1860 log.go:172] (0xc00057a580) Reply frame received for 5\nI0814 11:06:48.206721    1860 log.go:172] (0xc00057a580) Data frame received for 5\nI0814 11:06:48.206742    1860 log.go:172] (0xc000832000) (5) Data frame handling\nI0814 11:06:48.206755    1860 log.go:172] (0xc000832000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 11:06:48.461596    1860 log.go:172] (0xc00057a580) Data frame received for 3\nI0814 11:06:48.461633    1860 log.go:172] (0xc0001e8140) (3) Data frame handling\nI0814 11:06:48.461647    1860 log.go:172] (0xc0001e8140) (3) Data frame sent\nI0814 11:06:48.461657    1860 log.go:172] (0xc00057a580) Data frame received for 3\nI0814 11:06:48.461665    1860 log.go:172] (0xc0001e8140) (3) Data frame handling\nI0814 11:06:48.461694    1860 log.go:172] (0xc00057a580) Data frame received for 5\nI0814 11:06:48.461705    1860 log.go:172] (0xc000832000) (5) Data frame handling\nI0814 11:06:48.462745    1860 log.go:172] (0xc00057a580) Data frame received for 1\nI0814 11:06:48.462759    1860 log.go:172] (0xc0003da960) (1) Data frame handling\nI0814 11:06:48.462765    1860 log.go:172] (0xc0003da960) (1) Data frame sent\nI0814 11:06:48.462775    
1860 log.go:172] (0xc00057a580) (0xc0003da960) Stream removed, broadcasting: 1\nI0814 11:06:48.462789    1860 log.go:172] (0xc00057a580) Go away received\nI0814 11:06:48.463061    1860 log.go:172] (0xc00057a580) (0xc0003da960) Stream removed, broadcasting: 1\nI0814 11:06:48.463079    1860 log.go:172] (0xc00057a580) (0xc0001e8140) Stream removed, broadcasting: 3\nI0814 11:06:48.463093    1860 log.go:172] (0xc00057a580) (0xc000832000) Stream removed, broadcasting: 5\n"
Aug 14 11:06:48.469: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 11:06:48.469: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 11:06:58.496: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 14 11:07:08.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3508 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:07:08.954: INFO: stderr: "I0814 11:07:08.822725    1880 log.go:172] (0xc000a32420) (0xc0008e4640) Create stream\nI0814 11:07:08.822773    1880 log.go:172] (0xc000a32420) (0xc0008e4640) Stream added, broadcasting: 1\nI0814 11:07:08.824424    1880 log.go:172] (0xc000a32420) Reply frame received for 1\nI0814 11:07:08.824448    1880 log.go:172] (0xc000a32420) (0xc0008e46e0) Create stream\nI0814 11:07:08.824459    1880 log.go:172] (0xc000a32420) (0xc0008e46e0) Stream added, broadcasting: 3\nI0814 11:07:08.825383    1880 log.go:172] (0xc000a32420) Reply frame received for 3\nI0814 11:07:08.825410    1880 log.go:172] (0xc000a32420) (0xc0007ce000) Create stream\nI0814 11:07:08.825420    1880 log.go:172] (0xc000a32420) (0xc0007ce000) Stream added, broadcasting: 5\nI0814 11:07:08.826075    1880 log.go:172] (0xc000a32420) Reply frame received for 5\nI0814 11:07:08.879991    1880 log.go:172] (0xc000a32420) Data frame received for 5\nI0814 11:07:08.880022    1880 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0814 11:07:08.880041    1880 log.go:172] (0xc0007ce000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0814 11:07:08.946213    1880 log.go:172] (0xc000a32420) Data frame received for 3\nI0814 11:07:08.946248    1880 log.go:172] (0xc0008e46e0) (3) Data frame handling\nI0814 11:07:08.946270    1880 log.go:172] (0xc0008e46e0) (3) Data frame sent\nI0814 11:07:08.946297    1880 log.go:172] (0xc000a32420) Data frame received for 5\nI0814 11:07:08.946313    1880 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0814 11:07:08.946465    1880 log.go:172] (0xc000a32420) Data frame received for 3\nI0814 11:07:08.946498    1880 log.go:172] (0xc0008e46e0) (3) Data frame handling\nI0814 11:07:08.948078    1880 log.go:172] (0xc000a32420) Data frame received for 1\nI0814 11:07:08.948097    1880 log.go:172] (0xc0008e4640) (1) Data frame handling\nI0814 11:07:08.948115    1880 log.go:172] (0xc0008e4640) (1) Data frame sent\nI0814 11:07:08.948131    
1880 log.go:172] (0xc000a32420) (0xc0008e4640) Stream removed, broadcasting: 1\nI0814 11:07:08.948148    1880 log.go:172] (0xc000a32420) Go away received\nI0814 11:07:08.948506    1880 log.go:172] (0xc000a32420) (0xc0008e4640) Stream removed, broadcasting: 1\nI0814 11:07:08.948541    1880 log.go:172] (0xc000a32420) (0xc0008e46e0) Stream removed, broadcasting: 3\nI0814 11:07:08.948561    1880 log.go:172] (0xc000a32420) (0xc0007ce000) Stream removed, broadcasting: 5\n"
Aug 14 11:07:08.954: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 11:07:08.954: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 11:07:19.016: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:07:19.016: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:19.016: INFO: Waiting for Pod statefulset-3508/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:19.016: INFO: Waiting for Pod statefulset-3508/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:29.023: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:07:29.023: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:29.023: INFO: Waiting for Pod statefulset-3508/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:39.023: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:07:39.023: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:39.023: INFO: Waiting for Pod statefulset-3508/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:49.087: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:07:49.087: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:07:59.024: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
Aug 14 11:07:59.024: INFO: Waiting for Pod statefulset-3508/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 14 11:08:09.233: INFO: Waiting for StatefulSet statefulset-3508/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 14 11:08:19.023: INFO: Deleting all statefulset in ns statefulset-3508
Aug 14 11:08:19.027: INFO: Scaling statefulset ss2 to 0
Aug 14 11:08:49.394: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 11:08:49.395: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:08:49.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3508" for this suite.
Aug 14 11:09:05.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:09:05.860: INFO: namespace statefulset-3508 deletion completed in 16.362145567s

• [SLOW TEST:250.654 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:09:05.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-08d0420c-c27f-469b-852c-68366c68ea30
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-08d0420c-c27f-469b-852c-68366c68ea30
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:09:18.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1247" for this suite.
Aug 14 11:09:42.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:09:43.320: INFO: namespace projected-1247 deletion completed in 24.753804608s

• [SLOW TEST:37.460 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:09:43.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 14 11:09:44.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3078'
Aug 14 11:09:44.781: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 14 11:09:44.781: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 14 11:09:45.310: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-xj4np]
Aug 14 11:09:45.310: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-xj4np" in namespace "kubectl-3078" to be "running and ready"
Aug 14 11:09:45.895: INFO: Pod "e2e-test-nginx-rc-xj4np": Phase="Pending", Reason="", readiness=false. Elapsed: 584.790765ms
Aug 14 11:09:47.899: INFO: Pod "e2e-test-nginx-rc-xj4np": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589220768s
Aug 14 11:09:50.176: INFO: Pod "e2e-test-nginx-rc-xj4np": Phase="Pending", Reason="", readiness=false. Elapsed: 4.866281785s
Aug 14 11:09:52.200: INFO: Pod "e2e-test-nginx-rc-xj4np": Phase="Pending", Reason="", readiness=false. Elapsed: 6.890210027s
Aug 14 11:09:54.220: INFO: Pod "e2e-test-nginx-rc-xj4np": Phase="Running", Reason="", readiness=true. Elapsed: 8.910351746s
Aug 14 11:09:54.220: INFO: Pod "e2e-test-nginx-rc-xj4np" satisfied condition "running and ready"
Aug 14 11:09:54.220: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-xj4np]
Aug 14 11:09:54.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3078'
Aug 14 11:09:54.385: INFO: stderr: ""
Aug 14 11:09:54.385: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Aug 14 11:09:54.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3078'
Aug 14 11:09:55.062: INFO: stderr: ""
Aug 14 11:09:55.062: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:09:55.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3078" for this suite.
Aug 14 11:10:13.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:10:13.781: INFO: namespace kubectl-3078 deletion completed in 18.679498099s

• [SLOW TEST:30.460 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:10:13.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 14 11:10:27.507: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:10:28.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-885" for this suite.
Aug 14 11:10:39.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:10:39.901: INFO: namespace container-runtime-885 deletion completed in 11.139416773s

• [SLOW TEST:26.120 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:10:39.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 14 11:10:40.079: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:10:54.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6664" for this suite.
Aug 14 11:11:18.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:11:18.952: INFO: namespace init-container-6664 deletion completed in 24.166667108s

• [SLOW TEST:39.050 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:11:18.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 14 11:11:19.493: INFO: Waiting up to 5m0s for pod "pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e" in namespace "emptydir-5765" to be "success or failure"
Aug 14 11:11:19.700: INFO: Pod "pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e": Phase="Pending", Reason="", readiness=false. Elapsed: 206.773725ms
Aug 14 11:11:21.704: INFO: Pod "pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211208025s
Aug 14 11:11:23.707: INFO: Pod "pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214387275s
Aug 14 11:11:26.309: INFO: Pod "pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.816637953s
Aug 14 11:11:28.501: INFO: Pod "pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.008464967s
STEP: Saw pod success
Aug 14 11:11:28.501: INFO: Pod "pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e" satisfied condition "success or failure"
Aug 14 11:11:28.504: INFO: Trying to get logs from node iruya-worker pod pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e container test-container: 
STEP: delete the pod
Aug 14 11:11:28.849: INFO: Waiting for pod pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e to disappear
Aug 14 11:11:29.286: INFO: Pod pod-5549c10a-815c-4fd0-8fb3-4152c6d3e86e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:11:29.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5765" for this suite.
Aug 14 11:11:37.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:11:38.466: INFO: namespace emptydir-5765 deletion completed in 9.177395431s

• [SLOW TEST:19.514 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:11:38.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 14 11:11:38.966: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:12:05.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7969" for this suite.
Aug 14 11:12:15.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:12:15.861: INFO: namespace init-container-7969 deletion completed in 10.403564704s

• [SLOW TEST:37.394 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:12:15.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 14 11:12:16.015: INFO: Waiting up to 5m0s for pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f" in namespace "downward-api-2275" to be "success or failure"
Aug 14 11:12:16.049: INFO: Pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.880821ms
Aug 14 11:12:18.317: INFO: Pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301454338s
Aug 14 11:12:20.321: INFO: Pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305371002s
Aug 14 11:12:22.490: INFO: Pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474776963s
Aug 14 11:12:24.686: INFO: Pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f": Phase="Running", Reason="", readiness=true. Elapsed: 8.670843847s
Aug 14 11:12:26.691: INFO: Pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.675321912s
STEP: Saw pod success
Aug 14 11:12:26.691: INFO: Pod "downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f" satisfied condition "success or failure"
Aug 14 11:12:26.694: INFO: Trying to get logs from node iruya-worker pod downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f container dapi-container: 
STEP: delete the pod
Aug 14 11:12:26.906: INFO: Waiting for pod downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f to disappear
Aug 14 11:12:27.223: INFO: Pod downward-api-256fbb32-cbbd-42d0-be82-fc36af12d98f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:12:27.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2275" for this suite.
Aug 14 11:12:36.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:12:36.410: INFO: namespace downward-api-2275 deletion completed in 9.18389163s

• [SLOW TEST:20.550 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:12:36.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3db3321b-e2e8-49d9-98c7-c749be290437
STEP: Creating a pod to test consume secrets
Aug 14 11:12:36.616: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06" in namespace "projected-2678" to be "success or failure"
Aug 14 11:12:36.619: INFO: Pod "pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2303ms
Aug 14 11:12:38.898: INFO: Pod "pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281890177s
Aug 14 11:12:40.901: INFO: Pod "pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285193253s
Aug 14 11:12:42.971: INFO: Pod "pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355429253s
Aug 14 11:12:45.510: INFO: Pod "pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.894387249s
STEP: Saw pod success
Aug 14 11:12:45.510: INFO: Pod "pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06" satisfied condition "success or failure"
Aug 14 11:12:45.712: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06 container projected-secret-volume-test: 
STEP: delete the pod
Aug 14 11:12:46.115: INFO: Waiting for pod pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06 to disappear
Aug 14 11:12:46.305: INFO: Pod pod-projected-secrets-bfe1bae2-b439-45d1-9854-c1fca6119b06 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:12:46.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2678" for this suite.
Aug 14 11:12:54.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:12:55.369: INFO: namespace projected-2678 deletion completed in 9.060557215s

• [SLOW TEST:18.958 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:12:55.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-418
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 14 11:12:55.743: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 14 11:13:31.100: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.78:8080/dial?request=hostName&protocol=udp&host=10.244.2.26&port=8081&tries=1'] Namespace:pod-network-test-418 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:13:31.100: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:13:31.132036       6 log.go:172] (0xc002ffac60) (0xc001fb4140) Create stream
I0814 11:13:31.132067       6 log.go:172] (0xc002ffac60) (0xc001fb4140) Stream added, broadcasting: 1
I0814 11:13:31.134281       6 log.go:172] (0xc002ffac60) Reply frame received for 1
I0814 11:13:31.134322       6 log.go:172] (0xc002ffac60) (0xc0011f4500) Create stream
I0814 11:13:31.134346       6 log.go:172] (0xc002ffac60) (0xc0011f4500) Stream added, broadcasting: 3
I0814 11:13:31.135444       6 log.go:172] (0xc002ffac60) Reply frame received for 3
I0814 11:13:31.135472       6 log.go:172] (0xc002ffac60) (0xc001fb4280) Create stream
I0814 11:13:31.135482       6 log.go:172] (0xc002ffac60) (0xc001fb4280) Stream added, broadcasting: 5
I0814 11:13:31.136397       6 log.go:172] (0xc002ffac60) Reply frame received for 5
I0814 11:13:31.706120       6 log.go:172] (0xc002ffac60) Data frame received for 3
I0814 11:13:31.706147       6 log.go:172] (0xc0011f4500) (3) Data frame handling
I0814 11:13:31.706165       6 log.go:172] (0xc0011f4500) (3) Data frame sent
I0814 11:13:31.706812       6 log.go:172] (0xc002ffac60) Data frame received for 5
I0814 11:13:31.706840       6 log.go:172] (0xc002ffac60) Data frame received for 3
I0814 11:13:31.706882       6 log.go:172] (0xc0011f4500) (3) Data frame handling
I0814 11:13:31.706939       6 log.go:172] (0xc001fb4280) (5) Data frame handling
I0814 11:13:31.708867       6 log.go:172] (0xc002ffac60) Data frame received for 1
I0814 11:13:31.708964       6 log.go:172] (0xc001fb4140) (1) Data frame handling
I0814 11:13:31.708998       6 log.go:172] (0xc001fb4140) (1) Data frame sent
I0814 11:13:31.709017       6 log.go:172] (0xc002ffac60) (0xc001fb4140) Stream removed, broadcasting: 1
I0814 11:13:31.709039       6 log.go:172] (0xc002ffac60) Go away received
I0814 11:13:31.709196       6 log.go:172] (0xc002ffac60) (0xc001fb4140) Stream removed, broadcasting: 1
I0814 11:13:31.709221       6 log.go:172] (0xc002ffac60) (0xc0011f4500) Stream removed, broadcasting: 3
I0814 11:13:31.709243       6 log.go:172] (0xc002ffac60) (0xc001fb4280) Stream removed, broadcasting: 5
Aug 14 11:13:31.709: INFO: Waiting for endpoints: map[]
Aug 14 11:13:31.730: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.78:8080/dial?request=hostName&protocol=udp&host=10.244.1.77&port=8081&tries=1'] Namespace:pod-network-test-418 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:13:31.730: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:13:31.761940       6 log.go:172] (0xc001d3e4d0) (0xc0009c3220) Create stream
I0814 11:13:31.761971       6 log.go:172] (0xc001d3e4d0) (0xc0009c3220) Stream added, broadcasting: 1
I0814 11:13:31.764154       6 log.go:172] (0xc001d3e4d0) Reply frame received for 1
I0814 11:13:31.764218       6 log.go:172] (0xc001d3e4d0) (0xc0009c32c0) Create stream
I0814 11:13:31.764232       6 log.go:172] (0xc001d3e4d0) (0xc0009c32c0) Stream added, broadcasting: 3
I0814 11:13:31.765039       6 log.go:172] (0xc001d3e4d0) Reply frame received for 3
I0814 11:13:31.765071       6 log.go:172] (0xc001d3e4d0) (0xc001fb4320) Create stream
I0814 11:13:31.765080       6 log.go:172] (0xc001d3e4d0) (0xc001fb4320) Stream added, broadcasting: 5
I0814 11:13:31.765768       6 log.go:172] (0xc001d3e4d0) Reply frame received for 5
I0814 11:13:31.836370       6 log.go:172] (0xc001d3e4d0) Data frame received for 3
I0814 11:13:31.836403       6 log.go:172] (0xc0009c32c0) (3) Data frame handling
I0814 11:13:31.836420       6 log.go:172] (0xc0009c32c0) (3) Data frame sent
I0814 11:13:31.837317       6 log.go:172] (0xc001d3e4d0) Data frame received for 5
I0814 11:13:31.837348       6 log.go:172] (0xc001fb4320) (5) Data frame handling
I0814 11:13:31.837605       6 log.go:172] (0xc001d3e4d0) Data frame received for 3
I0814 11:13:31.837633       6 log.go:172] (0xc0009c32c0) (3) Data frame handling
I0814 11:13:31.838905       6 log.go:172] (0xc001d3e4d0) Data frame received for 1
I0814 11:13:31.838921       6 log.go:172] (0xc0009c3220) (1) Data frame handling
I0814 11:13:31.838930       6 log.go:172] (0xc0009c3220) (1) Data frame sent
I0814 11:13:31.838937       6 log.go:172] (0xc001d3e4d0) (0xc0009c3220) Stream removed, broadcasting: 1
I0814 11:13:31.839009       6 log.go:172] (0xc001d3e4d0) Go away received
I0814 11:13:31.839062       6 log.go:172] (0xc001d3e4d0) (0xc0009c3220) Stream removed, broadcasting: 1
I0814 11:13:31.839084       6 log.go:172] (0xc001d3e4d0) (0xc0009c32c0) Stream removed, broadcasting: 3
I0814 11:13:31.839093       6 log.go:172] (0xc001d3e4d0) (0xc001fb4320) Stream removed, broadcasting: 5
Aug 14 11:13:31.839: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:13:31.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-418" for this suite.
Aug 14 11:13:56.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:13:56.238: INFO: namespace pod-network-test-418 deletion completed in 24.394910483s

• [SLOW TEST:60.870 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:13:56.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-ab702526-f5fa-4c14-a4bb-d52f57eec625
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:13:56.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5961" for this suite.
Aug 14 11:14:02.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:14:02.505: INFO: namespace configmap-5961 deletion completed in 6.102076942s

• [SLOW TEST:6.267 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:14:02.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 14 11:14:02.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2853'
Aug 14 11:14:03.094: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 14 11:14:03.094: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Aug 14 11:14:03.176: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug 14 11:14:03.199: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 14 11:14:03.229: INFO: scanned /root for discovery docs: 
Aug 14 11:14:03.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2853'
Aug 14 11:14:27.676: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 14 11:14:27.676: INFO: stdout: "Created e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f\nScaling up e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 14 11:14:27.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2853'
Aug 14 11:14:28.656: INFO: stderr: ""
Aug 14 11:14:28.656: INFO: stdout: "e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f-9v847 "
Aug 14 11:14:28.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f-9v847 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2853'
Aug 14 11:14:28.945: INFO: stderr: ""
Aug 14 11:14:28.945: INFO: stdout: "true"
Aug 14 11:14:28.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f-9v847 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2853'
Aug 14 11:14:29.093: INFO: stderr: ""
Aug 14 11:14:29.093: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 14 11:14:29.093: INFO: e2e-test-nginx-rc-449a651485c2ff4cd3f285289d71451f-9v847 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Aug 14 11:14:29.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2853'
Aug 14 11:14:29.630: INFO: stderr: ""
Aug 14 11:14:29.630: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:14:29.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2853" for this suite.
Aug 14 11:14:54.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:14:54.955: INFO: namespace kubectl-2853 deletion completed in 25.13223423s

• [SLOW TEST:52.449 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:14:54.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:15:01.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8802" for this suite.
Aug 14 11:15:47.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:15:47.554: INFO: namespace kubelet-test-8802 deletion completed in 46.375952477s

• [SLOW TEST:52.599 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:15:47.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:15:47.615: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e" in namespace "downward-api-1471" to be "success or failure"
Aug 14 11:15:47.673: INFO: Pod "downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e": Phase="Pending", Reason="", readiness=false. Elapsed: 57.929739ms
Aug 14 11:15:49.677: INFO: Pod "downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061462085s
Aug 14 11:15:51.682: INFO: Pod "downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066122206s
Aug 14 11:15:54.103: INFO: Pod "downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.487795485s
STEP: Saw pod success
Aug 14 11:15:54.103: INFO: Pod "downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e" satisfied condition "success or failure"
Aug 14 11:15:54.107: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e container client-container: 
STEP: delete the pod
Aug 14 11:15:54.602: INFO: Waiting for pod downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e to disappear
Aug 14 11:15:54.671: INFO: Pod downwardapi-volume-14a71bc2-72d4-4084-98c3-17f75490e14e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:15:54.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1471" for this suite.
Aug 14 11:16:00.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:16:01.021: INFO: namespace downward-api-1471 deletion completed in 6.345976944s

• [SLOW TEST:13.466 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:16:01.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 14 11:16:01.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-424'
Aug 14 11:16:01.177: INFO: stderr: ""
Aug 14 11:16:01.177: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug 14 11:16:06.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-424 -o json'
Aug 14 11:16:06.319: INFO: stderr: ""
Aug 14 11:16:06.319: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-14T11:16:01Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-424\",\n        \"resourceVersion\": \"4876571\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-424/pods/e2e-test-nginx-pod\",\n        \"uid\": \"a96ad286-dc2f-4c6b-8f86-eb13e7246259\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-s6prt\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-s6prt\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-s6prt\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-14T11:16:01Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-14T11:16:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-14T11:16:05Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-14T11:16:01Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://3c52ad4af09977e637b0eb8cad5e339fd93cff3cc3564fc510be5dcb8b86f999\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        
\"startedAt\": \"2020-08-14T11:16:04Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.5\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.81\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-14T11:16:01Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 14 11:16:06.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-424'
Aug 14 11:16:07.263: INFO: stderr: ""
Aug 14 11:16:07.263: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Aug 14 11:16:07.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-424'
Aug 14 11:16:13.618: INFO: stderr: ""
Aug 14 11:16:13.618: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:16:13.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-424" for this suite.
Aug 14 11:16:19.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:16:19.826: INFO: namespace kubectl-424 deletion completed in 6.198116063s

• [SLOW TEST:18.805 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:16:19.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3712
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3712
STEP: Creating statefulset with conflicting port in namespace statefulset-3712
STEP: Waiting until pod test-pod will start running in namespace statefulset-3712
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3712
Aug 14 11:16:26.184: INFO: Observed stateful pod in namespace: statefulset-3712, name: ss-0, uid: 5d23c3da-484c-4303-aaec-6ffbc9714548, status phase: Failed. Waiting for statefulset controller to delete.
Aug 14 11:16:26.637: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3712
STEP: Removing pod with conflicting port in namespace statefulset-3712
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3712 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 14 11:16:33.270: INFO: Deleting all statefulset in ns statefulset-3712
Aug 14 11:16:33.272: INFO: Scaling statefulset ss to 0
Aug 14 11:16:53.455: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 11:16:53.457: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:16:53.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3712" for this suite.
Aug 14 11:16:59.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:16:59.605: INFO: namespace statefulset-3712 deletion completed in 6.10381868s

• [SLOW TEST:39.779 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:16:59.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 11:17:00.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:17:09.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7899" for this suite.
Aug 14 11:18:01.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:18:01.936: INFO: namespace pods-7899 deletion completed in 52.209610662s

• [SLOW TEST:62.331 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:18:01.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 14 11:18:02.558: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:02.729: INFO: Number of nodes with available pods: 0
Aug 14 11:18:02.729: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:18:03.733: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:03.736: INFO: Number of nodes with available pods: 0
Aug 14 11:18:03.736: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:18:04.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:04.756: INFO: Number of nodes with available pods: 0
Aug 14 11:18:04.756: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:18:05.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:05.745: INFO: Number of nodes with available pods: 0
Aug 14 11:18:05.745: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:18:07.097: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:07.658: INFO: Number of nodes with available pods: 0
Aug 14 11:18:07.658: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:18:07.891: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:07.908: INFO: Number of nodes with available pods: 0
Aug 14 11:18:07.908: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:18:08.825: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:08.828: INFO: Number of nodes with available pods: 0
Aug 14 11:18:08.828: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:18:10.246: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:10.460: INFO: Number of nodes with available pods: 1
Aug 14 11:18:10.460: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:11.539: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:12.029: INFO: Number of nodes with available pods: 2
Aug 14 11:18:12.029: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 14 11:18:12.805: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:13.149: INFO: Number of nodes with available pods: 1
Aug 14 11:18:13.149: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:14.705: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:14.708: INFO: Number of nodes with available pods: 1
Aug 14 11:18:14.708: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:15.153: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:15.155: INFO: Number of nodes with available pods: 1
Aug 14 11:18:15.155: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:16.186: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:16.208: INFO: Number of nodes with available pods: 1
Aug 14 11:18:16.209: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:17.161: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:17.550: INFO: Number of nodes with available pods: 1
Aug 14 11:18:17.550: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:18.347: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:18.556: INFO: Number of nodes with available pods: 1
Aug 14 11:18:18.556: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:19.154: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:19.158: INFO: Number of nodes with available pods: 1
Aug 14 11:18:19.158: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:20.153: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:20.156: INFO: Number of nodes with available pods: 1
Aug 14 11:18:20.156: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:21.154: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:21.156: INFO: Number of nodes with available pods: 1
Aug 14 11:18:21.156: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:22.155: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:22.158: INFO: Number of nodes with available pods: 1
Aug 14 11:18:22.158: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:23.678: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:23.765: INFO: Number of nodes with available pods: 1
Aug 14 11:18:23.765: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:24.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:24.430: INFO: Number of nodes with available pods: 1
Aug 14 11:18:24.430: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:25.167: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:25.170: INFO: Number of nodes with available pods: 1
Aug 14 11:18:25.170: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:26.154: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:26.157: INFO: Number of nodes with available pods: 1
Aug 14 11:18:26.157: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:27.263: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:27.724: INFO: Number of nodes with available pods: 1
Aug 14 11:18:27.724: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:28.353: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:28.699: INFO: Number of nodes with available pods: 1
Aug 14 11:18:28.699: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:29.365: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:29.431: INFO: Number of nodes with available pods: 1
Aug 14 11:18:29.431: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:30.153: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:30.155: INFO: Number of nodes with available pods: 1
Aug 14 11:18:30.155: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:31.264: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:31.268: INFO: Number of nodes with available pods: 1
Aug 14 11:18:31.268: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:32.153: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:32.156: INFO: Number of nodes with available pods: 1
Aug 14 11:18:32.156: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:18:33.161: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:18:33.163: INFO: Number of nodes with available pods: 2
Aug 14 11:18:33.163: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2187, will wait for the garbage collector to delete the pods
Aug 14 11:18:33.224: INFO: Deleting DaemonSet.extensions daemon-set took: 6.329416ms
Aug 14 11:18:33.524: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.208686ms
Aug 14 11:18:38.729: INFO: Number of nodes with available pods: 0
Aug 14 11:18:38.729: INFO: Number of running nodes: 0, number of available pods: 0
Aug 14 11:18:38.731: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2187/daemonsets","resourceVersion":"4877096"},"items":null}

Aug 14 11:18:38.771: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2187/pods","resourceVersion":"4877097"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:18:38.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2187" for this suite.
Aug 14 11:18:49.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:18:49.093: INFO: namespace daemonsets-2187 deletion completed in 10.307493496s

• [SLOW TEST:47.157 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:18:49.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-679e03c8-80bd-4068-b06e-471ec05a78ef
STEP: Creating a pod to test consume secrets
Aug 14 11:18:49.519: INFO: Waiting up to 5m0s for pod "pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65" in namespace "secrets-7502" to be "success or failure"
Aug 14 11:18:49.610: INFO: Pod "pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65": Phase="Pending", Reason="", readiness=false. Elapsed: 91.08432ms
Aug 14 11:18:51.614: INFO: Pod "pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094728838s
Aug 14 11:18:53.618: INFO: Pod "pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098749744s
STEP: Saw pod success
Aug 14 11:18:53.618: INFO: Pod "pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65" satisfied condition "success or failure"
Aug 14 11:18:53.621: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65 container secret-volume-test: 
STEP: delete the pod
Aug 14 11:18:53.695: INFO: Waiting for pod pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65 to disappear
Aug 14 11:18:53.718: INFO: Pod pod-secrets-6593971e-c7db-4c23-be87-3f550756ee65 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:18:53.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7502" for this suite.
Aug 14 11:18:59.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:18:59.900: INFO: namespace secrets-7502 deletion completed in 6.178248857s
STEP: Destroying namespace "secret-namespace-8700" for this suite.
Aug 14 11:19:05.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:19:06.034: INFO: namespace secret-namespace-8700 deletion completed in 6.13315044s

• [SLOW TEST:16.941 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:19:06.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 14 11:19:06.094: INFO: Waiting up to 5m0s for pod "downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6" in namespace "downward-api-2756" to be "success or failure"
Aug 14 11:19:06.096: INFO: Pod "downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.966142ms
Aug 14 11:19:08.629: INFO: Pod "downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.535361821s
Aug 14 11:19:11.006: INFO: Pod "downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.911929721s
Aug 14 11:19:13.010: INFO: Pod "downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.916094823s
STEP: Saw pod success
Aug 14 11:19:13.010: INFO: Pod "downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6" satisfied condition "success or failure"
Aug 14 11:19:13.013: INFO: Trying to get logs from node iruya-worker pod downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6 container dapi-container: 
STEP: delete the pod
Aug 14 11:19:14.823: INFO: Waiting for pod downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6 to disappear
Aug 14 11:19:15.251: INFO: Pod downward-api-c33a9c57-d3a2-4c78-9e3b-867b161bbeb6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:19:15.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2756" for this suite.
Aug 14 11:19:23.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:19:23.395: INFO: namespace downward-api-2756 deletion completed in 8.140245587s

• [SLOW TEST:17.361 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:19:23.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 14 11:19:26.062: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:19:45.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1806" for this suite.
Aug 14 11:19:53.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:19:53.364: INFO: namespace pods-1806 deletion completed in 8.226031179s

• [SLOW TEST:29.968 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:19:53.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-965
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 14 11:19:53.513: INFO: Found 0 stateful pods, waiting for 3
Aug 14 11:20:03.518: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:03.518: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:03.518: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 14 11:20:13.685: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:13.685: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:13.685: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 14 11:20:13.711: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 14 11:20:24.281: INFO: Updating stateful set ss2
Aug 14 11:20:24.287: INFO: Waiting for Pod statefulset-965/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 14 11:20:35.162: INFO: Found 2 stateful pods, waiting for 3
Aug 14 11:20:45.293: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:45.293: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:45.293: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 14 11:20:55.165: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:55.165: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:20:55.165: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 14 11:20:55.187: INFO: Updating stateful set ss2
Aug 14 11:20:55.220: INFO: Waiting for Pod statefulset-965/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:21:15.737: INFO: Updating stateful set ss2
Aug 14 11:21:15.831: INFO: Waiting for StatefulSet statefulset-965/ss2 to complete update
Aug 14 11:21:15.831: INFO: Waiting for Pod statefulset-965/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 14 11:21:26.042: INFO: Waiting for StatefulSet statefulset-965/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 14 11:21:35.869: INFO: Deleting all statefulset in ns statefulset-965
Aug 14 11:21:35.872: INFO: Scaling statefulset ss2 to 0
Aug 14 11:21:55.930: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 11:21:55.933: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:21:55.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-965" for this suite.
Aug 14 11:22:05.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:22:06.048: INFO: namespace statefulset-965 deletion completed in 10.082926679s

• [SLOW TEST:132.684 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:22:06.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 14 11:22:06.191: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 14 11:22:06.290: INFO: Waiting for terminating namespaces to be deleted...
Aug 14 11:22:06.297: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 14 11:22:06.376: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.376: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 14 11:22:06.376: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.376: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 14 11:22:06.376: INFO: sprout-686cc64cfb-smjks from ims-p7dpm started at 2020-08-13 08:25:21 +0000 UTC (2 container statuses recorded)
Aug 14 11:22:06.376: INFO: 	Container sprout ready: false, restart count 0
Aug 14 11:22:06.376: INFO: 	Container tailer ready: false, restart count 0
Aug 14 11:22:06.376: INFO: homestead-prov-756c8bff5d-d6lsl from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.376: INFO: 	Container homestead-prov ready: false, restart count 0
Aug 14 11:22:06.376: INFO: etcd-5cbf55c8c-k46jp from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.376: INFO: 	Container etcd ready: true, restart count 0
Aug 14 11:22:06.376: INFO: cassandra-76f5c4d86c-h2nwg from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.376: INFO: 	Container cassandra ready: true, restart count 0
Aug 14 11:22:06.376: INFO: homer-74dd4556d9-ws825 from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.376: INFO: 	Container homer ready: true, restart count 0
Aug 14 11:22:06.376: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 14 11:22:06.384: INFO: ellis-57b84b6dd7-xv7nx from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.384: INFO: 	Container ellis ready: false, restart count 0
Aug 14 11:22:06.384: INFO: bono-5cdb7bfcdd-rq8q2 from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 11:22:06.384: INFO: 	Container bono ready: false, restart count 0
Aug 14 11:22:06.384: INFO: 	Container tailer ready: false, restart count 0
Aug 14 11:22:06.384: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container status recorded)
Aug 14 11:22:06.384: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 14 11:22:06.384: INFO: homestead-57586d6cdc-g8qmw from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (2 container statuses recorded)
Aug 14 11:22:06.384: INFO: 	Container homestead ready: false, restart count 390
Aug 14 11:22:06.384: INFO: 	Container tailer ready: true, restart count 0
Aug 14 11:22:06.384: INFO: ralf-57c4654cb8-sctv6 from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (2 container statuses recorded)
Aug 14 11:22:06.384: INFO: 	Container ralf ready: true, restart count 0
Aug 14 11:22:06.384: INFO: 	Container tailer ready: true, restart count 0
Aug 14 11:22:06.384: INFO: astaire-5ddcdd6b7f-hppqv from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 11:22:06.385: INFO: 	Container astaire ready: true, restart count 0
Aug 14 11:22:06.385: INFO: 	Container tailer ready: true, restart count 0
Aug 14 11:22:06.385: INFO: chronos-687b9884c5-g8mpr from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 11:22:06.385: INFO: 	Container chronos ready: true, restart count 0
Aug 14 11:22:06.385: INFO: 	Container tailer ready: true, restart count 0
Aug 14 11:22:06.385: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container status recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8d1f822c-ad6a-4070-92e8-a09b1fd42453 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8d1f822c-ad6a-4070-92e8-a09b1fd42453 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8d1f822c-ad6a-4070-92e8-a09b1fd42453
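The steps above exercise the scheduler's NodeSelector predicate: the test labels one node with a random key/value, then relaunches the pod with a matching nodeSelector so it can only land on that node. A minimal sketch of the matching rule (in Python rather than the framework's Go — the node label below is taken from the log, the hostname entries are illustrative):

```python
def node_selector_matches(node_labels, node_selector):
    # The NodeSelector predicate admits a node only when every key/value
    # pair in the pod's nodeSelector is present among the node's labels.
    return all(node_labels.get(k) == v for k, v in node_selector.items())

label = "kubernetes.io/e2e-8d1f822c-ad6a-4070-92e8-a09b1fd42453"
labeled_node = {label: "42", "kubernetes.io/hostname": "iruya-worker2"}
other_node = {"kubernetes.io/hostname": "iruya-worker"}
selector = {label: "42"}
```

An empty nodeSelector matches every node, which is why the test's first pod (launched "without a label") can schedule anywhere.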
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:22:26.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9716" for this suite.
Aug 14 11:23:05.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:23:05.786: INFO: namespace sched-pred-9716 deletion completed in 38.797475519s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:59.739 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:23:05.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
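The pod created here runs a container whose command references environment variables with the `$(VAR_NAME)` syntax, which the kubelet expands before starting the container. A rough sketch of that expansion, assuming the usual rules that `$$(VAR)` escapes to a literal `$(VAR)` and unknown references are left as-is (variable names below are illustrative):

```python
import re

def expand(command, env):
    # Expand $(NAME) from env; $$(NAME) escapes to a literal $(NAME);
    # references to undefined variables are left untouched.
    def repl(m):
        if m.group(0).startswith("$$"):
            return m.group(0)[1:]
        return env.get(m.group(1), m.group(0))
    return re.sub(r"\$?\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, command)
```

The e2e test asserts the container's output shows the substituted values rather than the raw `$(...)` references.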
Aug 14 11:23:06.557: INFO: Waiting up to 5m0s for pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d" in namespace "var-expansion-330" to be "success or failure"
Aug 14 11:23:07.243: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d": Phase="Pending", Reason="", readiness=false. Elapsed: 686.427479ms
Aug 14 11:23:09.247: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.69024633s
Aug 14 11:23:11.251: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.69454115s
Aug 14 11:23:13.255: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.698049256s
Aug 14 11:23:15.320: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.763747476s
Aug 14 11:23:17.351: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d": Phase="Running", Reason="", readiness=true. Elapsed: 10.794445944s
Aug 14 11:23:19.354: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.797732952s
STEP: Saw pod success
Aug 14 11:23:19.354: INFO: Pod "var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d" satisfied condition "success or failure"
Aug 14 11:23:19.356: INFO: Trying to get logs from node iruya-worker pod var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d container dapi-container: 
STEP: delete the pod
Aug 14 11:23:20.055: INFO: Waiting for pod var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d to disappear
Aug 14 11:23:20.273: INFO: Pod var-expansion-b491f520-3604-4bec-9f64-ad041d7cf40d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:23:20.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-330" for this suite.
Aug 14 11:23:26.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:23:26.677: INFO: namespace var-expansion-330 deletion completed in 6.400004787s

• [SLOW TEST:20.890 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:23:26.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
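The test mounts an emptyDir volume backed by tmpfs (`medium: Memory`) and has the container create a file with mode 0666, then verify the permission bits as a non-root user. A small local sketch of the same check, using an ordinary temp directory in place of the tmpfs mount:

```python
import os
import stat
import tempfile

def write_with_mode(directory, name, mode):
    # Create a file, then force its permission bits with chmod, which
    # (unlike open()'s mode argument) is not masked by the process umask.
    path = os.path.join(directory, name)
    with open(path, "w") as f:
        f.write("mount-tester content\n")
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as scratch:
    observed = write_with_mode(scratch, "test-file", 0o666)
```

Mode 0666 grants read/write to owner, group, and others, which is what the conformance test's mount-tester image reports back in the pod logs.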
Aug 14 11:23:27.423: INFO: Waiting up to 5m0s for pod "pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b" in namespace "emptydir-21" to be "success or failure"
Aug 14 11:23:27.425: INFO: Pod "pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470249ms
Aug 14 11:23:29.677: INFO: Pod "pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254563614s
Aug 14 11:23:32.040: INFO: Pod "pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.616814891s
Aug 14 11:23:34.226: INFO: Pod "pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802914579s
Aug 14 11:23:36.231: INFO: Pod "pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.808193601s
STEP: Saw pod success
Aug 14 11:23:36.231: INFO: Pod "pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b" satisfied condition "success or failure"
Aug 14 11:23:36.234: INFO: Trying to get logs from node iruya-worker2 pod pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b container test-container: 
STEP: delete the pod
Aug 14 11:23:37.102: INFO: Waiting for pod pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b to disappear
Aug 14 11:23:37.363: INFO: Pod pod-47cb0bae-1786-4c27-81be-9fe62b26fd1b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:23:37.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-21" for this suite.
Aug 14 11:23:45.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:23:46.248: INFO: namespace emptydir-21 deletion completed in 8.881426738s

• [SLOW TEST:19.570 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:23:46.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
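"Kubelet-managed" means the kubelet writes a hosts file itself and bind-mounts it at /etc/hosts for pods with hostNetwork=false. A sketch of roughly what that file contains — the marker header plus the conventional localhost entries and the pod's own IP/hostname; the exact entry set is an assumption, and the IP below is illustrative:

```python
def managed_etc_hosts(pod_ip, hostname):
    # Approximation of the kubelet-written hosts file for a
    # hostNetwork=false pod: marker header, localhost entries,
    # then the pod's own address.
    lines = [
        "# Kubernetes-managed hosts file.",
        "127.0.0.1\tlocalhost",
        "::1\tlocalhost ip6-localhost ip6-loopback",
        "fe00::0\tip6-localnet",
        "fe00::0\tip6-mcastprefix",
        "fe00::1\tip6-allnodes",
        "fe00::2\tip6-allrouters",
        f"{pod_ip}\t{hostname}",
    ]
    return "\n".join(lines) + "\n"

hosts = managed_etc_hosts("10.244.1.10", "test-pod")
```

The test's three cases follow from this: containers in a hostNetwork=false pod see the managed file, a container that mounts its own volume at /etc/hosts does not, and a hostNetwork=true pod sees the node's real /etc/hosts.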
Aug 14 11:24:12.938: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:12.938: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:13.011004       6 log.go:172] (0xc003010f20) (0xc002c795e0) Create stream
I0814 11:24:13.011046       6 log.go:172] (0xc003010f20) (0xc002c795e0) Stream added, broadcasting: 1
I0814 11:24:13.019521       6 log.go:172] (0xc003010f20) Reply frame received for 1
I0814 11:24:13.019587       6 log.go:172] (0xc003010f20) (0xc0018c2780) Create stream
I0814 11:24:13.019603       6 log.go:172] (0xc003010f20) (0xc0018c2780) Stream added, broadcasting: 3
I0814 11:24:13.021318       6 log.go:172] (0xc003010f20) Reply frame received for 3
I0814 11:24:13.021354       6 log.go:172] (0xc003010f20) (0xc002c79680) Create stream
I0814 11:24:13.021367       6 log.go:172] (0xc003010f20) (0xc002c79680) Stream added, broadcasting: 5
I0814 11:24:13.022467       6 log.go:172] (0xc003010f20) Reply frame received for 5
I0814 11:24:13.097255       6 log.go:172] (0xc003010f20) Data frame received for 5
I0814 11:24:13.097289       6 log.go:172] (0xc002c79680) (5) Data frame handling
I0814 11:24:13.097357       6 log.go:172] (0xc003010f20) Data frame received for 3
I0814 11:24:13.097399       6 log.go:172] (0xc0018c2780) (3) Data frame handling
I0814 11:24:13.097425       6 log.go:172] (0xc0018c2780) (3) Data frame sent
I0814 11:24:13.097443       6 log.go:172] (0xc003010f20) Data frame received for 3
I0814 11:24:13.097460       6 log.go:172] (0xc0018c2780) (3) Data frame handling
I0814 11:24:13.098862       6 log.go:172] (0xc003010f20) Data frame received for 1
I0814 11:24:13.098892       6 log.go:172] (0xc002c795e0) (1) Data frame handling
I0814 11:24:13.098908       6 log.go:172] (0xc002c795e0) (1) Data frame sent
I0814 11:24:13.098919       6 log.go:172] (0xc003010f20) (0xc002c795e0) Stream removed, broadcasting: 1
I0814 11:24:13.098935       6 log.go:172] (0xc003010f20) Go away received
I0814 11:24:13.099147       6 log.go:172] (0xc003010f20) (0xc002c795e0) Stream removed, broadcasting: 1
I0814 11:24:13.099183       6 log.go:172] (0xc003010f20) (0xc0018c2780) Stream removed, broadcasting: 3
I0814 11:24:13.099199       6 log.go:172] (0xc003010f20) (0xc002c79680) Stream removed, broadcasting: 5
Aug 14 11:24:13.099: INFO: Exec stderr: ""
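The `log.go` lines above show the exec transport multiplexing one connection into several broadcast streams (numbered 1, 3, and 5 here) and then collecting data frames per stream; which number carries stdout versus stderr is not stated in the log, so the mapping below is only illustrative. A toy demultiplexer in the same spirit:

```python
def demux_frames(frames):
    # Reassemble (stream_id, payload) frames into per-stream byte
    # buffers, the way the framework gathers stdout/stderr from
    # separate channels of one multiplexed connection.
    buffers = {}
    for stream_id, payload in frames:
        buffers[stream_id] = buffers.get(stream_id, b"") + payload
    return buffers

# Hypothetical frames: stream 3 carrying command output, stream 5 empty,
# matching the empty "Exec stderr" reported above.
frames = [(3, b"127.0.0.1\tlocal"), (3, b"host\n"), (5, b"")]
streams = demux_frames(frames)
```

Once every stream is removed, the collected buffers are returned as the exec's output, which is why each exec in this log ends with an `Exec stderr: ""` line.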
Aug 14 11:24:13.099: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:13.099: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:13.145377       6 log.go:172] (0xc001e9a840) (0xc0020603c0) Create stream
I0814 11:24:13.145435       6 log.go:172] (0xc001e9a840) (0xc0020603c0) Stream added, broadcasting: 1
I0814 11:24:13.147781       6 log.go:172] (0xc001e9a840) Reply frame received for 1
I0814 11:24:13.147806       6 log.go:172] (0xc001e9a840) (0xc0018c2820) Create stream
I0814 11:24:13.147816       6 log.go:172] (0xc001e9a840) (0xc0018c2820) Stream added, broadcasting: 3
I0814 11:24:13.149117       6 log.go:172] (0xc001e9a840) Reply frame received for 3
I0814 11:24:13.149138       6 log.go:172] (0xc001e9a840) (0xc002060460) Create stream
I0814 11:24:13.149148       6 log.go:172] (0xc001e9a840) (0xc002060460) Stream added, broadcasting: 5
I0814 11:24:13.149963       6 log.go:172] (0xc001e9a840) Reply frame received for 5
I0814 11:24:13.202706       6 log.go:172] (0xc001e9a840) Data frame received for 5
I0814 11:24:13.202752       6 log.go:172] (0xc002060460) (5) Data frame handling
I0814 11:24:13.202771       6 log.go:172] (0xc001e9a840) Data frame received for 3
I0814 11:24:13.202783       6 log.go:172] (0xc0018c2820) (3) Data frame handling
I0814 11:24:13.202801       6 log.go:172] (0xc0018c2820) (3) Data frame sent
I0814 11:24:13.202811       6 log.go:172] (0xc001e9a840) Data frame received for 3
I0814 11:24:13.202819       6 log.go:172] (0xc0018c2820) (3) Data frame handling
I0814 11:24:13.203513       6 log.go:172] (0xc001e9a840) Data frame received for 1
I0814 11:24:13.203540       6 log.go:172] (0xc0020603c0) (1) Data frame handling
I0814 11:24:13.203561       6 log.go:172] (0xc0020603c0) (1) Data frame sent
I0814 11:24:13.203581       6 log.go:172] (0xc001e9a840) (0xc0020603c0) Stream removed, broadcasting: 1
I0814 11:24:13.203606       6 log.go:172] (0xc001e9a840) Go away received
I0814 11:24:13.203768       6 log.go:172] (0xc001e9a840) (0xc0020603c0) Stream removed, broadcasting: 1
I0814 11:24:13.203805       6 log.go:172] (0xc001e9a840) (0xc0018c2820) Stream removed, broadcasting: 3
I0814 11:24:13.203816       6 log.go:172] (0xc001e9a840) (0xc002060460) Stream removed, broadcasting: 5
Aug 14 11:24:13.203: INFO: Exec stderr: ""
Aug 14 11:24:13.203: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:13.203: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:13.229599       6 log.go:172] (0xc003011ad0) (0xc002c799a0) Create stream
I0814 11:24:13.229623       6 log.go:172] (0xc003011ad0) (0xc002c799a0) Stream added, broadcasting: 1
I0814 11:24:13.231556       6 log.go:172] (0xc003011ad0) Reply frame received for 1
I0814 11:24:13.231606       6 log.go:172] (0xc003011ad0) (0xc0018c28c0) Create stream
I0814 11:24:13.231620       6 log.go:172] (0xc003011ad0) (0xc0018c28c0) Stream added, broadcasting: 3
I0814 11:24:13.232352       6 log.go:172] (0xc003011ad0) Reply frame received for 3
I0814 11:24:13.232383       6 log.go:172] (0xc003011ad0) (0xc002c79a40) Create stream
I0814 11:24:13.232391       6 log.go:172] (0xc003011ad0) (0xc002c79a40) Stream added, broadcasting: 5
I0814 11:24:13.233099       6 log.go:172] (0xc003011ad0) Reply frame received for 5
I0814 11:24:13.289180       6 log.go:172] (0xc003011ad0) Data frame received for 5
I0814 11:24:13.289219       6 log.go:172] (0xc002c79a40) (5) Data frame handling
I0814 11:24:13.289246       6 log.go:172] (0xc003011ad0) Data frame received for 3
I0814 11:24:13.289260       6 log.go:172] (0xc0018c28c0) (3) Data frame handling
I0814 11:24:13.289276       6 log.go:172] (0xc0018c28c0) (3) Data frame sent
I0814 11:24:13.289288       6 log.go:172] (0xc003011ad0) Data frame received for 3
I0814 11:24:13.289298       6 log.go:172] (0xc0018c28c0) (3) Data frame handling
I0814 11:24:13.290124       6 log.go:172] (0xc003011ad0) Data frame received for 1
I0814 11:24:13.290136       6 log.go:172] (0xc002c799a0) (1) Data frame handling
I0814 11:24:13.290149       6 log.go:172] (0xc002c799a0) (1) Data frame sent
I0814 11:24:13.290260       6 log.go:172] (0xc003011ad0) (0xc002c799a0) Stream removed, broadcasting: 1
I0814 11:24:13.290425       6 log.go:172] (0xc003011ad0) (0xc002c799a0) Stream removed, broadcasting: 1
I0814 11:24:13.290456       6 log.go:172] (0xc003011ad0) (0xc0018c28c0) Stream removed, broadcasting: 3
I0814 11:24:13.290485       6 log.go:172] (0xc003011ad0) (0xc002c79a40) Stream removed, broadcasting: 5
Aug 14 11:24:13.290: INFO: Exec stderr: ""
I0814 11:24:13.290534       6 log.go:172] (0xc003011ad0) Go away received
Aug 14 11:24:13.290: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:13.290: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:13.323530       6 log.go:172] (0xc000e728f0) (0xc00290a8c0) Create stream
I0814 11:24:13.323570       6 log.go:172] (0xc000e728f0) (0xc00290a8c0) Stream added, broadcasting: 1
I0814 11:24:13.326097       6 log.go:172] (0xc000e728f0) Reply frame received for 1
I0814 11:24:13.326133       6 log.go:172] (0xc000e728f0) (0xc002060500) Create stream
I0814 11:24:13.326141       6 log.go:172] (0xc000e728f0) (0xc002060500) Stream added, broadcasting: 3
I0814 11:24:13.326975       6 log.go:172] (0xc000e728f0) Reply frame received for 3
I0814 11:24:13.327012       6 log.go:172] (0xc000e728f0) (0xc002a87ae0) Create stream
I0814 11:24:13.327024       6 log.go:172] (0xc000e728f0) (0xc002a87ae0) Stream added, broadcasting: 5
I0814 11:24:13.327704       6 log.go:172] (0xc000e728f0) Reply frame received for 5
I0814 11:24:13.405726       6 log.go:172] (0xc000e728f0) Data frame received for 3
I0814 11:24:13.405805       6 log.go:172] (0xc002060500) (3) Data frame handling
I0814 11:24:13.405864       6 log.go:172] (0xc002060500) (3) Data frame sent
I0814 11:24:13.405906       6 log.go:172] (0xc000e728f0) Data frame received for 3
I0814 11:24:13.405922       6 log.go:172] (0xc002060500) (3) Data frame handling
I0814 11:24:13.405950       6 log.go:172] (0xc000e728f0) Data frame received for 5
I0814 11:24:13.406002       6 log.go:172] (0xc002a87ae0) (5) Data frame handling
I0814 11:24:13.406986       6 log.go:172] (0xc000e728f0) Data frame received for 1
I0814 11:24:13.407040       6 log.go:172] (0xc00290a8c0) (1) Data frame handling
I0814 11:24:13.407098       6 log.go:172] (0xc00290a8c0) (1) Data frame sent
I0814 11:24:13.407124       6 log.go:172] (0xc000e728f0) (0xc00290a8c0) Stream removed, broadcasting: 1
I0814 11:24:13.407147       6 log.go:172] (0xc000e728f0) Go away received
I0814 11:24:13.407245       6 log.go:172] (0xc000e728f0) (0xc00290a8c0) Stream removed, broadcasting: 1
I0814 11:24:13.407258       6 log.go:172] (0xc000e728f0) (0xc002060500) Stream removed, broadcasting: 3
I0814 11:24:13.407266       6 log.go:172] (0xc000e728f0) (0xc002a87ae0) Stream removed, broadcasting: 5
Aug 14 11:24:13.407: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 14 11:24:13.407: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:13.407: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:13.754530       6 log.go:172] (0xc000f46fd0) (0xc002a87e00) Create stream
I0814 11:24:13.754551       6 log.go:172] (0xc000f46fd0) (0xc002a87e00) Stream added, broadcasting: 1
I0814 11:24:13.756296       6 log.go:172] (0xc000f46fd0) Reply frame received for 1
I0814 11:24:13.756320       6 log.go:172] (0xc000f46fd0) (0xc0018c2a00) Create stream
I0814 11:24:13.756327       6 log.go:172] (0xc000f46fd0) (0xc0018c2a00) Stream added, broadcasting: 3
I0814 11:24:13.757080       6 log.go:172] (0xc000f46fd0) Reply frame received for 3
I0814 11:24:13.757114       6 log.go:172] (0xc000f46fd0) (0xc0018c2aa0) Create stream
I0814 11:24:13.757121       6 log.go:172] (0xc000f46fd0) (0xc0018c2aa0) Stream added, broadcasting: 5
I0814 11:24:13.757698       6 log.go:172] (0xc000f46fd0) Reply frame received for 5
I0814 11:24:13.826789       6 log.go:172] (0xc000f46fd0) Data frame received for 5
I0814 11:24:13.826824       6 log.go:172] (0xc0018c2aa0) (5) Data frame handling
I0814 11:24:13.826849       6 log.go:172] (0xc000f46fd0) Data frame received for 3
I0814 11:24:13.826859       6 log.go:172] (0xc0018c2a00) (3) Data frame handling
I0814 11:24:13.826867       6 log.go:172] (0xc0018c2a00) (3) Data frame sent
I0814 11:24:13.826875       6 log.go:172] (0xc000f46fd0) Data frame received for 3
I0814 11:24:13.826880       6 log.go:172] (0xc0018c2a00) (3) Data frame handling
I0814 11:24:13.827926       6 log.go:172] (0xc000f46fd0) Data frame received for 1
I0814 11:24:13.827962       6 log.go:172] (0xc002a87e00) (1) Data frame handling
I0814 11:24:13.827991       6 log.go:172] (0xc002a87e00) (1) Data frame sent
I0814 11:24:13.828003       6 log.go:172] (0xc000f46fd0) (0xc002a87e00) Stream removed, broadcasting: 1
I0814 11:24:13.828014       6 log.go:172] (0xc000f46fd0) Go away received
I0814 11:24:13.828155       6 log.go:172] (0xc000f46fd0) (0xc002a87e00) Stream removed, broadcasting: 1
I0814 11:24:13.828173       6 log.go:172] (0xc000f46fd0) (0xc0018c2a00) Stream removed, broadcasting: 3
I0814 11:24:13.828185       6 log.go:172] (0xc000f46fd0) (0xc0018c2aa0) Stream removed, broadcasting: 5
Aug 14 11:24:13.828: INFO: Exec stderr: ""
Aug 14 11:24:13.828: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:13.828: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:13.877379       6 log.go:172] (0xc001e9bce0) (0xc0020606e0) Create stream
I0814 11:24:13.877406       6 log.go:172] (0xc001e9bce0) (0xc0020606e0) Stream added, broadcasting: 1
I0814 11:24:13.879889       6 log.go:172] (0xc001e9bce0) Reply frame received for 1
I0814 11:24:13.879957       6 log.go:172] (0xc001e9bce0) (0xc002060780) Create stream
I0814 11:24:13.879989       6 log.go:172] (0xc001e9bce0) (0xc002060780) Stream added, broadcasting: 3
I0814 11:24:13.881043       6 log.go:172] (0xc001e9bce0) Reply frame received for 3
I0814 11:24:13.881069       6 log.go:172] (0xc001e9bce0) (0xc00290ab40) Create stream
I0814 11:24:13.881078       6 log.go:172] (0xc001e9bce0) (0xc00290ab40) Stream added, broadcasting: 5
I0814 11:24:13.882076       6 log.go:172] (0xc001e9bce0) Reply frame received for 5
I0814 11:24:13.941224       6 log.go:172] (0xc001e9bce0) Data frame received for 5
I0814 11:24:13.941264       6 log.go:172] (0xc00290ab40) (5) Data frame handling
I0814 11:24:13.941285       6 log.go:172] (0xc001e9bce0) Data frame received for 3
I0814 11:24:13.941296       6 log.go:172] (0xc002060780) (3) Data frame handling
I0814 11:24:13.941308       6 log.go:172] (0xc002060780) (3) Data frame sent
I0814 11:24:13.941318       6 log.go:172] (0xc001e9bce0) Data frame received for 3
I0814 11:24:13.941327       6 log.go:172] (0xc002060780) (3) Data frame handling
I0814 11:24:13.942410       6 log.go:172] (0xc001e9bce0) Data frame received for 1
I0814 11:24:13.942427       6 log.go:172] (0xc0020606e0) (1) Data frame handling
I0814 11:24:13.942437       6 log.go:172] (0xc0020606e0) (1) Data frame sent
I0814 11:24:13.942450       6 log.go:172] (0xc001e9bce0) (0xc0020606e0) Stream removed, broadcasting: 1
I0814 11:24:13.942479       6 log.go:172] (0xc001e9bce0) Go away received
I0814 11:24:13.942647       6 log.go:172] (0xc001e9bce0) (0xc0020606e0) Stream removed, broadcasting: 1
I0814 11:24:13.942670       6 log.go:172] (0xc001e9bce0) (0xc002060780) Stream removed, broadcasting: 3
I0814 11:24:13.942684       6 log.go:172] (0xc001e9bce0) (0xc00290ab40) Stream removed, broadcasting: 5
Aug 14 11:24:13.942: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 14 11:24:13.942: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:13.942: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:13.968697       6 log.go:172] (0xc000e73c30) (0xc00290ae60) Create stream
I0814 11:24:13.968803       6 log.go:172] (0xc000e73c30) (0xc00290ae60) Stream added, broadcasting: 1
I0814 11:24:13.971848       6 log.go:172] (0xc000e73c30) Reply frame received for 1
I0814 11:24:13.971890       6 log.go:172] (0xc000e73c30) (0xc002a87ea0) Create stream
I0814 11:24:13.971910       6 log.go:172] (0xc000e73c30) (0xc002a87ea0) Stream added, broadcasting: 3
I0814 11:24:13.973062       6 log.go:172] (0xc000e73c30) Reply frame received for 3
I0814 11:24:13.973088       6 log.go:172] (0xc000e73c30) (0xc002060820) Create stream
I0814 11:24:13.973105       6 log.go:172] (0xc000e73c30) (0xc002060820) Stream added, broadcasting: 5
I0814 11:24:13.974064       6 log.go:172] (0xc000e73c30) Reply frame received for 5
I0814 11:24:14.028507       6 log.go:172] (0xc000e73c30) Data frame received for 3
I0814 11:24:14.028535       6 log.go:172] (0xc002a87ea0) (3) Data frame handling
I0814 11:24:14.028543       6 log.go:172] (0xc002a87ea0) (3) Data frame sent
I0814 11:24:14.028548       6 log.go:172] (0xc000e73c30) Data frame received for 3
I0814 11:24:14.028555       6 log.go:172] (0xc002a87ea0) (3) Data frame handling
I0814 11:24:14.028632       6 log.go:172] (0xc000e73c30) Data frame received for 5
I0814 11:24:14.028662       6 log.go:172] (0xc002060820) (5) Data frame handling
I0814 11:24:14.029620       6 log.go:172] (0xc000e73c30) Data frame received for 1
I0814 11:24:14.029637       6 log.go:172] (0xc00290ae60) (1) Data frame handling
I0814 11:24:14.029670       6 log.go:172] (0xc00290ae60) (1) Data frame sent
I0814 11:24:14.029689       6 log.go:172] (0xc000e73c30) (0xc00290ae60) Stream removed, broadcasting: 1
I0814 11:24:14.029766       6 log.go:172] (0xc000e73c30) (0xc00290ae60) Stream removed, broadcasting: 1
I0814 11:24:14.029781       6 log.go:172] (0xc000e73c30) (0xc002a87ea0) Stream removed, broadcasting: 3
I0814 11:24:14.029788       6 log.go:172] (0xc000e73c30) (0xc002060820) Stream removed, broadcasting: 5
Aug 14 11:24:14.029: INFO: Exec stderr: ""
Aug 14 11:24:14.029: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:14.029: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:14.031542       6 log.go:172] (0xc000e73c30) Go away received
I0814 11:24:14.052189       6 log.go:172] (0xc0025f64d0) (0xc0022f6000) Create stream
I0814 11:24:14.052246       6 log.go:172] (0xc0025f64d0) (0xc0022f6000) Stream added, broadcasting: 1
I0814 11:24:14.054179       6 log.go:172] (0xc0025f64d0) Reply frame received for 1
I0814 11:24:14.054208       6 log.go:172] (0xc0025f64d0) (0xc0018c2b40) Create stream
I0814 11:24:14.054220       6 log.go:172] (0xc0025f64d0) (0xc0018c2b40) Stream added, broadcasting: 3
I0814 11:24:14.055091       6 log.go:172] (0xc0025f64d0) Reply frame received for 3
I0814 11:24:14.055122       6 log.go:172] (0xc0025f64d0) (0xc0018c2be0) Create stream
I0814 11:24:14.055135       6 log.go:172] (0xc0025f64d0) (0xc0018c2be0) Stream added, broadcasting: 5
I0814 11:24:14.055795       6 log.go:172] (0xc0025f64d0) Reply frame received for 5
I0814 11:24:14.129098       6 log.go:172] (0xc0025f64d0) Data frame received for 3
I0814 11:24:14.129130       6 log.go:172] (0xc0018c2b40) (3) Data frame handling
I0814 11:24:14.129149       6 log.go:172] (0xc0025f64d0) Data frame received for 5
I0814 11:24:14.129174       6 log.go:172] (0xc0018c2be0) (5) Data frame handling
I0814 11:24:14.129192       6 log.go:172] (0xc0018c2b40) (3) Data frame sent
I0814 11:24:14.129201       6 log.go:172] (0xc0025f64d0) Data frame received for 3
I0814 11:24:14.129209       6 log.go:172] (0xc0018c2b40) (3) Data frame handling
I0814 11:24:14.129876       6 log.go:172] (0xc0025f64d0) Data frame received for 1
I0814 11:24:14.129893       6 log.go:172] (0xc0022f6000) (1) Data frame handling
I0814 11:24:14.129916       6 log.go:172] (0xc0022f6000) (1) Data frame sent
I0814 11:24:14.129935       6 log.go:172] (0xc0025f64d0) (0xc0022f6000) Stream removed, broadcasting: 1
I0814 11:24:14.129950       6 log.go:172] (0xc0025f64d0) Go away received
I0814 11:24:14.130069       6 log.go:172] (0xc0025f64d0) (0xc0022f6000) Stream removed, broadcasting: 1
I0814 11:24:14.130083       6 log.go:172] (0xc0025f64d0) (0xc0018c2b40) Stream removed, broadcasting: 3
I0814 11:24:14.130088       6 log.go:172] (0xc0025f64d0) (0xc0018c2be0) Stream removed, broadcasting: 5
Aug 14 11:24:14.130: INFO: Exec stderr: ""
Aug 14 11:24:14.130: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:14.130: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:14.155481       6 log.go:172] (0xc00291ba20) (0xc0018c30e0) Create stream
I0814 11:24:14.155533       6 log.go:172] (0xc00291ba20) (0xc0018c30e0) Stream added, broadcasting: 1
I0814 11:24:14.157811       6 log.go:172] (0xc00291ba20) Reply frame received for 1
I0814 11:24:14.157850       6 log.go:172] (0xc00291ba20) (0xc0020608c0) Create stream
I0814 11:24:14.157860       6 log.go:172] (0xc00291ba20) (0xc0020608c0) Stream added, broadcasting: 3
I0814 11:24:14.158467       6 log.go:172] (0xc00291ba20) Reply frame received for 3
I0814 11:24:14.158535       6 log.go:172] (0xc00291ba20) (0xc0022f60a0) Create stream
I0814 11:24:14.158555       6 log.go:172] (0xc00291ba20) (0xc0022f60a0) Stream added, broadcasting: 5
I0814 11:24:14.159218       6 log.go:172] (0xc00291ba20) Reply frame received for 5
I0814 11:24:14.220480       6 log.go:172] (0xc00291ba20) Data frame received for 5
I0814 11:24:14.220538       6 log.go:172] (0xc00291ba20) Data frame received for 3
I0814 11:24:14.220584       6 log.go:172] (0xc0020608c0) (3) Data frame handling
I0814 11:24:14.220598       6 log.go:172] (0xc0020608c0) (3) Data frame sent
I0814 11:24:14.220608       6 log.go:172] (0xc00291ba20) Data frame received for 3
I0814 11:24:14.220624       6 log.go:172] (0xc0020608c0) (3) Data frame handling
I0814 11:24:14.220656       6 log.go:172] (0xc0022f60a0) (5) Data frame handling
I0814 11:24:14.221441       6 log.go:172] (0xc00291ba20) Data frame received for 1
I0814 11:24:14.221461       6 log.go:172] (0xc0018c30e0) (1) Data frame handling
I0814 11:24:14.221473       6 log.go:172] (0xc0018c30e0) (1) Data frame sent
I0814 11:24:14.221513       6 log.go:172] (0xc00291ba20) (0xc0018c30e0) Stream removed, broadcasting: 1
I0814 11:24:14.221554       6 log.go:172] (0xc00291ba20) Go away received
I0814 11:24:14.221651       6 log.go:172] (0xc00291ba20) (0xc0018c30e0) Stream removed, broadcasting: 1
I0814 11:24:14.221673       6 log.go:172] (0xc00291ba20) (0xc0020608c0) Stream removed, broadcasting: 3
I0814 11:24:14.221682       6 log.go:172] (0xc00291ba20) (0xc0022f60a0) Stream removed, broadcasting: 5
Aug 14 11:24:14.221: INFO: Exec stderr: ""
Aug 14 11:24:14.221: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8246 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:24:14.221: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:24:14.243255       6 log.go:172] (0xc0025f76b0) (0xc0022f63c0) Create stream
I0814 11:24:14.243271       6 log.go:172] (0xc0025f76b0) (0xc0022f63c0) Stream added, broadcasting: 1
I0814 11:24:14.247973       6 log.go:172] (0xc0025f76b0) Reply frame received for 1
I0814 11:24:14.248031       6 log.go:172] (0xc0025f76b0) (0xc002c79ae0) Create stream
I0814 11:24:14.248047       6 log.go:172] (0xc0025f76b0) (0xc002c79ae0) Stream added, broadcasting: 3
I0814 11:24:14.248940       6 log.go:172] (0xc0025f76b0) Reply frame received for 3
I0814 11:24:14.248986       6 log.go:172] (0xc0025f76b0) (0xc002060a00) Create stream
I0814 11:24:14.249004       6 log.go:172] (0xc0025f76b0) (0xc002060a00) Stream added, broadcasting: 5
I0814 11:24:14.249755       6 log.go:172] (0xc0025f76b0) Reply frame received for 5
I0814 11:24:14.324415       6 log.go:172] (0xc0025f76b0) Data frame received for 5
I0814 11:24:14.324449       6 log.go:172] (0xc002060a00) (5) Data frame handling
I0814 11:24:14.324471       6 log.go:172] (0xc0025f76b0) Data frame received for 3
I0814 11:24:14.324480       6 log.go:172] (0xc002c79ae0) (3) Data frame handling
I0814 11:24:14.324489       6 log.go:172] (0xc002c79ae0) (3) Data frame sent
I0814 11:24:14.324497       6 log.go:172] (0xc0025f76b0) Data frame received for 3
I0814 11:24:14.324505       6 log.go:172] (0xc002c79ae0) (3) Data frame handling
I0814 11:24:14.325305       6 log.go:172] (0xc0025f76b0) Data frame received for 1
I0814 11:24:14.325321       6 log.go:172] (0xc0022f63c0) (1) Data frame handling
I0814 11:24:14.325357       6 log.go:172] (0xc0022f63c0) (1) Data frame sent
I0814 11:24:14.325376       6 log.go:172] (0xc0025f76b0) (0xc0022f63c0) Stream removed, broadcasting: 1
I0814 11:24:14.325394       6 log.go:172] (0xc0025f76b0) Go away received
I0814 11:24:14.325521       6 log.go:172] (0xc0025f76b0) (0xc0022f63c0) Stream removed, broadcasting: 1
I0814 11:24:14.325553       6 log.go:172] (0xc0025f76b0) (0xc002c79ae0) Stream removed, broadcasting: 3
I0814 11:24:14.325568       6 log.go:172] (0xc0025f76b0) (0xc002060a00) Stream removed, broadcasting: 5
Aug 14 11:24:14.325: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:24:14.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8246" for this suite.
Aug 14 11:25:12.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:25:12.524: INFO: namespace e2e-kubelet-etc-hosts-8246 deletion completed in 58.195098848s

• [SLOW TEST:86.276 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
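The exec traffic above (the `log.go:172` stream lines with "broadcasting: 1/3/5" are the SPDY error/stdout/stderr channels of the framework's `ExecWithOptions`) boils down to reading `/etc/hosts` inside the pod's containers. A rough manual equivalent using plain `kubectl exec`, with the pod, container, and namespace names taken from the log (the namespace is destroyed at the end of the run, so this is illustrative only):

```shell
# Read the kubelet-managed /etc/hosts and the image's original copy from the
# host-network test pod, as the e2e framework does via ExecWithOptions.
kubectl --kubeconfig=/root/.kube/config -n e2e-kubelet-etc-hosts-8246 \
  exec test-host-network-pod -c busybox-2 -- cat /etc/hosts
kubectl --kubeconfig=/root/.kube/config -n e2e-kubelet-etc-hosts-8246 \
  exec test-host-network-pod -c busybox-2 -- cat /etc/hosts-original
```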
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:25:12.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Aug 14 11:25:13.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6032'
Aug 14 11:25:36.186: INFO: stderr: ""
Aug 14 11:25:36.186: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Aug 14 11:25:37.617: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:37.617: INFO: Found 0 / 1
Aug 14 11:25:38.197: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:38.197: INFO: Found 0 / 1
Aug 14 11:25:39.377: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:39.377: INFO: Found 0 / 1
Aug 14 11:25:40.562: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:40.563: INFO: Found 0 / 1
Aug 14 11:25:41.940: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:41.940: INFO: Found 0 / 1
Aug 14 11:25:42.461: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:42.461: INFO: Found 0 / 1
Aug 14 11:25:43.402: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:43.402: INFO: Found 0 / 1
Aug 14 11:25:44.234: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:44.234: INFO: Found 0 / 1
Aug 14 11:25:45.503: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:45.503: INFO: Found 0 / 1
Aug 14 11:25:46.192: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:46.192: INFO: Found 0 / 1
Aug 14 11:25:47.207: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:47.207: INFO: Found 0 / 1
Aug 14 11:25:48.389: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:48.389: INFO: Found 0 / 1
Aug 14 11:25:49.193: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:49.193: INFO: Found 1 / 1
Aug 14 11:25:49.193: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 14 11:25:49.196: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 11:25:49.196: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 14 11:25:49.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bvxc7 redis-master --namespace=kubectl-6032'
Aug 14 11:25:49.300: INFO: stderr: ""
Aug 14 11:25:49.300: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 14 Aug 11:25:46.089 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Aug 11:25:46.089 # Server started, Redis version 3.2.12\n1:M 14 Aug 11:25:46.089 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Aug 11:25:46.089 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 14 11:25:49.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bvxc7 redis-master --namespace=kubectl-6032 --tail=1'
Aug 14 11:25:49.419: INFO: stderr: ""
Aug 14 11:25:49.419: INFO: stdout: "1:M 14 Aug 11:25:46.089 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 14 11:25:49.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bvxc7 redis-master --namespace=kubectl-6032 --limit-bytes=1'
Aug 14 11:25:49.541: INFO: stderr: ""
Aug 14 11:25:49.541: INFO: stdout: " "
STEP: exposing timestamps
Aug 14 11:25:49.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bvxc7 redis-master --namespace=kubectl-6032 --tail=1 --timestamps'
Aug 14 11:25:50.009: INFO: stderr: ""
Aug 14 11:25:50.009: INFO: stdout: "2020-08-14T11:25:46.281871877Z 1:M 14 Aug 11:25:46.089 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 14 11:25:52.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bvxc7 redis-master --namespace=kubectl-6032 --since=1s'
Aug 14 11:25:52.611: INFO: stderr: ""
Aug 14 11:25:52.611: INFO: stdout: ""
Aug 14 11:25:52.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bvxc7 redis-master --namespace=kubectl-6032 --since=24h'
Aug 14 11:25:52.999: INFO: stderr: ""
Aug 14 11:25:52.999: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 14 Aug 11:25:46.089 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Aug 11:25:46.089 # Server started, Redis version 3.2.12\n1:M 14 Aug 11:25:46.089 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Aug 11:25:46.089 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Aug 14 11:25:52.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6032'
Aug 14 11:25:53.176: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 14 11:25:53.176: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 14 11:25:53.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-6032'
Aug 14 11:25:53.307: INFO: stderr: "No resources found.\n"
Aug 14 11:25:53.308: INFO: stdout: ""
Aug 14 11:25:53.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-6032 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 14 11:25:53.563: INFO: stderr: ""
Aug 14 11:25:53.564: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:25:53.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6032" for this suite.
Aug 14 11:26:05.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:26:05.689: INFO: namespace kubectl-6032 deletion completed in 12.121994435s

• [SLOW TEST:53.164 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
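The kubectl-logs test above exercises the four log-filtering flags in sequence: `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. Condensed into one sketch (the pod name `redis-master-bvxc7` comes from the log and no longer exists; substitute your own pod and container):

```shell
# Log filtering flags, in the order the test runs them.
kubectl logs redis-master-bvxc7 redis-master --tail=1            # last line only
kubectl logs redis-master-bvxc7 redis-master --limit-bytes=1     # first byte only
kubectl logs redis-master-bvxc7 redis-master --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs redis-master-bvxc7 redis-master --since=1s          # empty if nothing was logged in the last second
kubectl logs redis-master-bvxc7 redis-master --since=24h         # everything from the last 24 hours
```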
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:26:05.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 14 11:26:06.550: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:06.629: INFO: Number of nodes with available pods: 0
Aug 14 11:26:06.629: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:08.080: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:08.148: INFO: Number of nodes with available pods: 0
Aug 14 11:26:08.148: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:08.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:08.638: INFO: Number of nodes with available pods: 0
Aug 14 11:26:08.638: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:09.708: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:09.711: INFO: Number of nodes with available pods: 0
Aug 14 11:26:09.711: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:10.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:10.638: INFO: Number of nodes with available pods: 0
Aug 14 11:26:10.638: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:11.791: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:12.534: INFO: Number of nodes with available pods: 0
Aug 14 11:26:12.534: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:12.633: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:12.767: INFO: Number of nodes with available pods: 0
Aug 14 11:26:12.767: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:13.690: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:13.693: INFO: Number of nodes with available pods: 0
Aug 14 11:26:13.693: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:14.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:15.478: INFO: Number of nodes with available pods: 0
Aug 14 11:26:15.478: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:26:16.398: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:16.408: INFO: Number of nodes with available pods: 1
Aug 14 11:26:16.408: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 14 11:26:17.126: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:17.474: INFO: Number of nodes with available pods: 2
Aug 14 11:26:17.474: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 14 11:26:18.675: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:26:18.679: INFO: Number of nodes with available pods: 2
Aug 14 11:26:18.679: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3504, will wait for the garbage collector to delete the pods
Aug 14 11:26:20.654: INFO: Deleting DaemonSet.extensions daemon-set took: 4.464905ms
Aug 14 11:26:21.254: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.264678ms
Aug 14 11:26:25.957: INFO: Number of nodes with available pods: 0
Aug 14 11:26:25.957: INFO: Number of running nodes: 0, number of available pods: 0
Aug 14 11:26:25.959: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3504/daemonsets","resourceVersion":"4878558"},"items":null}

Aug 14 11:26:25.962: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3504/pods","resourceVersion":"4878558"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:26:26.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3504" for this suite.
Aug 14 11:26:34.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:26:34.116: INFO: namespace daemonsets-3504 deletion completed in 8.090169945s

• [SLOW TEST:28.426 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
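The repeated "DaemonSet pods can't tolerate node iruya-control-plane with taints" lines show why the pod count converges at 2, not 3: the test's DaemonSet has no toleration for the control plane's `node-role.kubernetes.io/master:NoSchedule` taint, so that node is skipped. A minimal sketch of a DaemonSet that *would* also land on the tainted node (names and image are illustrative, not the test's actual spec):

```shell
# DaemonSet with a toleration matching the taint reported in the log above.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # tolerate the control-plane taint
        effect: NoSchedule
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
EOF
```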
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:26:34.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 11:26:34.227: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

(the same alternatives.log / containers/ directory listing repeats for each remaining proxy attempt; the intervening INFO lines and the end of this test were lost in extraction)
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:26:40.997: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f" in namespace "downward-api-9255" to be "success or failure"
Aug 14 11:26:41.057: INFO: Pod "downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f": Phase="Pending", Reason="", readiness=false. Elapsed: 59.316024ms
Aug 14 11:26:43.810: INFO: Pod "downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812330381s
Aug 14 11:26:45.887: INFO: Pod "downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f": Phase="Running", Reason="", readiness=true. Elapsed: 4.889739728s
Aug 14 11:26:47.905: INFO: Pod "downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.907812201s
STEP: Saw pod success
Aug 14 11:26:47.905: INFO: Pod "downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f" satisfied condition "success or failure"
Aug 14 11:26:47.928: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f container client-container: 
STEP: delete the pod
Aug 14 11:26:49.151: INFO: Waiting for pod downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f to disappear
Aug 14 11:26:49.154: INFO: Pod downwardapi-volume-92b9b420-7bec-4fbe-82f8-8bf85ba4f95f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:26:49.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9255" for this suite.
Aug 14 11:26:57.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:26:57.431: INFO: namespace downward-api-9255 deletion completed in 8.274168397s

• [SLOW TEST:16.760 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
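The downward-API volume test above mounts the container's own memory request as a file and checks the pod can read it back. A sketch of the mechanism (pod name, paths, and values are illustrative, not the test's generated spec):

```shell
# Project the container's memory request into a file via a downwardAPI volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests: {memory: "32Mi"}
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: "1Mi"
EOF
```

With `divisor: "1Mi"`, the file contains the request expressed in mebibytes (here `32`).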
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:26:57.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-d87a4239-7573-41fb-a88f-5b6c47ea0481
STEP: Creating a pod to test consume secrets
Aug 14 11:26:58.735: INFO: Waiting up to 5m0s for pod "pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31" in namespace "secrets-7053" to be "success or failure"
Aug 14 11:26:59.035: INFO: Pod "pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31": Phase="Pending", Reason="", readiness=false. Elapsed: 299.982835ms
Aug 14 11:27:01.038: INFO: Pod "pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303808154s
Aug 14 11:27:03.273: INFO: Pod "pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538636722s
Aug 14 11:27:05.606: INFO: Pod "pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.87140856s
Aug 14 11:27:07.678: INFO: Pod "pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.943104764s
STEP: Saw pod success
Aug 14 11:27:07.678: INFO: Pod "pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31" satisfied condition "success or failure"
Aug 14 11:27:07.681: INFO: Trying to get logs from node iruya-worker pod pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31 container secret-volume-test: 
STEP: delete the pod
Aug 14 11:27:08.266: INFO: Waiting for pod pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31 to disappear
Aug 14 11:27:08.662: INFO: Pod pod-secrets-60583076-4bf9-47ed-a415-5dabee3adc31 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:27:08.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7053" for this suite.
Aug 14 11:27:14.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:27:14.980: INFO: namespace secrets-7053 deletion completed in 6.314013355s

• [SLOW TEST:17.548 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
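"Volume with mappings" in the secrets test above means the secret's keys are remapped to custom file paths via `items[]`, rather than mounted under their key names. A sketch with illustrative names (the test generates its own random secret and pod names):

```shell
# Mount a secret key at a remapped path inside the pod.
kubectl create secret generic secret-test-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1            # secret key
        path: new-path-data-1  # file name it appears under in the mount
EOF
```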
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:27:14.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:27:21.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5379" for this suite.
Aug 14 11:28:14.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:28:14.222: INFO: namespace kubelet-test-5379 deletion completed in 53.083774871s

• [SLOW TEST:59.241 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:28:14.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:28:16.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3" in namespace "projected-4930" to be "success or failure"
Aug 14 11:28:16.126: INFO: Pod "downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.360571ms
Aug 14 11:28:18.130: INFO: Pod "downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007593595s
Aug 14 11:28:20.133: INFO: Pod "downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01094317s
Aug 14 11:28:22.137: INFO: Pod "downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3": Phase="Running", Reason="", readiness=true. Elapsed: 6.014941575s
Aug 14 11:28:24.141: INFO: Pod "downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018985222s
STEP: Saw pod success
Aug 14 11:28:24.141: INFO: Pod "downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3" satisfied condition "success or failure"
Aug 14 11:28:24.145: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3 container client-container: 
STEP: delete the pod
Aug 14 11:28:24.226: INFO: Waiting for pod downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3 to disappear
Aug 14 11:28:24.457: INFO: Pod downwardapi-volume-afd40fd6-734b-43ba-9f9d-1a2a9dca68d3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:28:24.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4930" for this suite.
Aug 14 11:28:32.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:28:32.784: INFO: namespace projected-4930 deletion completed in 8.322117601s

• [SLOW TEST:18.562 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
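Each "Elapsed" value in these wait loops is measured from the start of the wait, so the gap between two consecutive log timestamps should match the gap between their Elapsed values. A quick sanity check against the first two Pending lines of the test above (11:28:16.126 / Elapsed 3.360571ms and 11:28:18.130 / Elapsed 2.007593595s):

```python
from datetime import datetime

FMT = '%H:%M:%S.%f'

def delta_seconds(t1, t2):
    """Seconds between two e2e log timestamps of the form HH:MM:SS.mmm."""
    return (datetime.strptime(t2, FMT) - datetime.strptime(t1, FMT)).total_seconds()

wall = delta_seconds('11:28:16.126', '11:28:18.130')  # wall-clock gap
elapsed = 2.007593595 - 0.003360571                   # gap between Elapsed values
print(round(wall, 3), round(elapsed, 3))

# The two gaps agree to within the millisecond precision the timestamps carry.
assert abs(wall - elapsed) < 0.001
```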
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:28:32.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-5574
STEP: Creating a pod to test atomic-volume-subpath
Aug 14 11:28:33.760: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5574" in namespace "subpath-6170" to be "success or failure"
Aug 14 11:28:34.009: INFO: Pod "pod-subpath-test-projected-5574": Phase="Pending", Reason="", readiness=false. Elapsed: 248.837378ms
Aug 14 11:28:36.016: INFO: Pod "pod-subpath-test-projected-5574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255055133s
Aug 14 11:28:38.464: INFO: Pod "pod-subpath-test-projected-5574": Phase="Pending", Reason="", readiness=false. Elapsed: 4.703020651s
Aug 14 11:28:40.468: INFO: Pod "pod-subpath-test-projected-5574": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707439185s
Aug 14 11:28:43.218: INFO: Pod "pod-subpath-test-projected-5574": Phase="Pending", Reason="", readiness=false. Elapsed: 9.45731674s
Aug 14 11:28:45.334: INFO: Pod "pod-subpath-test-projected-5574": Phase="Pending", Reason="", readiness=false. Elapsed: 11.573558346s
Aug 14 11:28:48.718: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 14.957803006s
Aug 14 11:28:51.490: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 17.72980635s
Aug 14 11:28:53.984: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 20.223896373s
Aug 14 11:28:56.861: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 23.100670487s
Aug 14 11:28:59.096: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 25.335862633s
Aug 14 11:29:02.159: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 28.398854908s
Aug 14 11:29:04.916: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 31.155100261s
Aug 14 11:29:07.477: INFO: Pod "pod-subpath-test-projected-5574": Phase="Running", Reason="", readiness=true. Elapsed: 33.716171345s
Aug 14 11:29:09.916: INFO: Pod "pod-subpath-test-projected-5574": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.155553326s
STEP: Saw pod success
Aug 14 11:29:09.916: INFO: Pod "pod-subpath-test-projected-5574" satisfied condition "success or failure"
Aug 14 11:29:09.941: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-5574 container test-container-subpath-projected-5574: 
STEP: delete the pod
Aug 14 11:29:11.299: INFO: Waiting for pod pod-subpath-test-projected-5574 to disappear
Aug 14 11:29:12.428: INFO: Pod pod-subpath-test-projected-5574 no longer exists
STEP: Deleting pod pod-subpath-test-projected-5574
Aug 14 11:29:12.428: INFO: Deleting pod "pod-subpath-test-projected-5574" in namespace "subpath-6170"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:29:12.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6170" for this suite.
Aug 14 11:29:26.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:29:27.160: INFO: namespace subpath-6170 deletion completed in 14.695402632s

• [SLOW TEST:54.376 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:29:27.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7863
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 14 11:29:27.339: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 14 11:30:28.352: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.111:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7863 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:30:28.352: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:30:28.387514       6 log.go:172] (0xc001f220b0) (0xc0020600a0) Create stream
I0814 11:30:28.387544       6 log.go:172] (0xc001f220b0) (0xc0020600a0) Stream added, broadcasting: 1
I0814 11:30:28.389557       6 log.go:172] (0xc001f220b0) Reply frame received for 1
I0814 11:30:28.389590       6 log.go:172] (0xc001f220b0) (0xc003184000) Create stream
I0814 11:30:28.389601       6 log.go:172] (0xc001f220b0) (0xc003184000) Stream added, broadcasting: 3
I0814 11:30:28.390333       6 log.go:172] (0xc001f220b0) Reply frame received for 3
I0814 11:30:28.390377       6 log.go:172] (0xc001f220b0) (0xc0031840a0) Create stream
I0814 11:30:28.390389       6 log.go:172] (0xc001f220b0) (0xc0031840a0) Stream added, broadcasting: 5
I0814 11:30:28.391102       6 log.go:172] (0xc001f220b0) Reply frame received for 5
I0814 11:30:28.484268       6 log.go:172] (0xc001f220b0) Data frame received for 3
I0814 11:30:28.484309       6 log.go:172] (0xc003184000) (3) Data frame handling
I0814 11:30:28.484335       6 log.go:172] (0xc003184000) (3) Data frame sent
I0814 11:30:28.484390       6 log.go:172] (0xc001f220b0) Data frame received for 3
I0814 11:30:28.484431       6 log.go:172] (0xc003184000) (3) Data frame handling
I0814 11:30:28.484506       6 log.go:172] (0xc001f220b0) Data frame received for 5
I0814 11:30:28.484524       6 log.go:172] (0xc0031840a0) (5) Data frame handling
I0814 11:30:28.486059       6 log.go:172] (0xc001f220b0) Data frame received for 1
I0814 11:30:28.486077       6 log.go:172] (0xc0020600a0) (1) Data frame handling
I0814 11:30:28.486095       6 log.go:172] (0xc0020600a0) (1) Data frame sent
I0814 11:30:28.486183       6 log.go:172] (0xc001f220b0) (0xc0020600a0) Stream removed, broadcasting: 1
I0814 11:30:28.486278       6 log.go:172] (0xc001f220b0) (0xc003184000) Stream removed, broadcasting: 3
I0814 11:30:28.486289       6 log.go:172] (0xc001f220b0) (0xc0031840a0) Stream removed, broadcasting: 5
Aug 14 11:30:28.486: INFO: Found all expected endpoints: [netserver-0]
I0814 11:30:28.486367       6 log.go:172] (0xc001f220b0) Go away received
Aug 14 11:30:29.112: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.48:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7863 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:30:29.112: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:30:29.608446       6 log.go:172] (0xc000f46f20) (0xc002452a00) Create stream
I0814 11:30:29.608475       6 log.go:172] (0xc000f46f20) (0xc002452a00) Stream added, broadcasting: 1
I0814 11:30:29.610811       6 log.go:172] (0xc000f46f20) Reply frame received for 1
I0814 11:30:29.610844       6 log.go:172] (0xc000f46f20) (0xc002452aa0) Create stream
I0814 11:30:29.610855       6 log.go:172] (0xc000f46f20) (0xc002452aa0) Stream added, broadcasting: 3
I0814 11:30:29.611472       6 log.go:172] (0xc000f46f20) Reply frame received for 3
I0814 11:30:29.611493       6 log.go:172] (0xc000f46f20) (0xc002452b40) Create stream
I0814 11:30:29.611501       6 log.go:172] (0xc000f46f20) (0xc002452b40) Stream added, broadcasting: 5
I0814 11:30:29.612190       6 log.go:172] (0xc000f46f20) Reply frame received for 5
I0814 11:30:29.685237       6 log.go:172] (0xc000f46f20) Data frame received for 3
I0814 11:30:29.685257       6 log.go:172] (0xc002452aa0) (3) Data frame handling
I0814 11:30:29.685269       6 log.go:172] (0xc002452aa0) (3) Data frame sent
I0814 11:30:29.685273       6 log.go:172] (0xc000f46f20) Data frame received for 3
I0814 11:30:29.685277       6 log.go:172] (0xc002452aa0) (3) Data frame handling
I0814 11:30:29.685860       6 log.go:172] (0xc000f46f20) Data frame received for 5
I0814 11:30:29.685886       6 log.go:172] (0xc002452b40) (5) Data frame handling
I0814 11:30:29.687382       6 log.go:172] (0xc000f46f20) Data frame received for 1
I0814 11:30:29.687440       6 log.go:172] (0xc002452a00) (1) Data frame handling
I0814 11:30:29.687473       6 log.go:172] (0xc002452a00) (1) Data frame sent
I0814 11:30:29.687495       6 log.go:172] (0xc000f46f20) (0xc002452a00) Stream removed, broadcasting: 1
I0814 11:30:29.687521       6 log.go:172] (0xc000f46f20) Go away received
I0814 11:30:29.687607       6 log.go:172] (0xc000f46f20) (0xc002452a00) Stream removed, broadcasting: 1
I0814 11:30:29.687628       6 log.go:172] (0xc000f46f20) (0xc002452aa0) Stream removed, broadcasting: 3
I0814 11:30:29.687639       6 log.go:172] (0xc000f46f20) (0xc002452b40) Stream removed, broadcasting: 5
Aug 14 11:30:29.687: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:30:29.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7863" for this suite.
Aug 14 11:31:04.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:31:06.097: INFO: namespace pod-network-test-7863 deletion completed in 36.349821359s

• [SLOW TEST:98.937 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
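The connectivity check above execs `curl --max-time 15 --connect-timeout 1 http://<pod-ip>:8080/hostName` inside a host test pod and matches the returned hostname against the expected netserver endpoints. A self-contained sketch of that probe, using a local stand-in HTTP server instead of a real netserver pod (the handler and the `netserver-0` response are stand-ins, not the test's actual server):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Local stand-in for a netserver pod: answers /hostName with its name,
# like the endpoint the e2e test curls at http://<pod-ip>:8080/hostName.
class HostNameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'netserver-0' if self.path == '/hostName' else b''
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(('127.0.0.1', 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The probe itself: fetch /hostName with a timeout, as the logged curl does.
hostname = urlopen(f'http://127.0.0.1:{port}/hostName', timeout=15).read().decode()
print(hostname)  # netserver-0
server.shutdown()
```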
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:31:06.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 14 11:31:08.212: INFO: Waiting up to 5m0s for pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22" in namespace "downward-api-3863" to be "success or failure"
Aug 14 11:31:09.033: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22": Phase="Pending", Reason="", readiness=false. Elapsed: 821.517229ms
Aug 14 11:31:11.161: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.94929256s
Aug 14 11:31:13.311: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22": Phase="Pending", Reason="", readiness=false. Elapsed: 5.099198355s
Aug 14 11:31:15.315: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22": Phase="Pending", Reason="", readiness=false. Elapsed: 7.1029083s
Aug 14 11:31:17.397: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22": Phase="Pending", Reason="", readiness=false. Elapsed: 9.184849083s
Aug 14 11:31:19.399: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22": Phase="Pending", Reason="", readiness=false. Elapsed: 11.187218608s
Aug 14 11:31:21.795: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.583107668s
STEP: Saw pod success
Aug 14 11:31:21.795: INFO: Pod "downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22" satisfied condition "success or failure"
Aug 14 11:31:22.111: INFO: Trying to get logs from node iruya-worker2 pod downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22 container dapi-container: 
STEP: delete the pod
Aug 14 11:31:23.458: INFO: Waiting for pod downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22 to disappear
Aug 14 11:31:23.547: INFO: Pod downward-api-7b5723d9-01ca-42d7-9c33-3593967a3d22 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:31:23.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3863" for this suite.
Aug 14 11:31:34.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:31:35.521: INFO: namespace downward-api-3863 deletion completed in 11.471474296s

• [SLOW TEST:29.423 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
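The pod manifest this Downward API test creates is not shown in the log, but the standard way to expose a container's limits and requests as environment variables is a `resourceFieldRef`. A hypothetical spec fragment in that shape (only the container name `dapi-container` comes from the log; image and variable names are illustrative):

```yaml
# Hypothetical container spec fragment; the test's actual manifest is not in the log.
containers:
- name: dapi-container
  image: busybox
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: dapi-container
        resource: limits.cpu
  - name: MEMORY_REQUEST
    valueFrom:
      resourceFieldRef:
        containerName: dapi-container
        resource: requests.memory
```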
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:31:35.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-40ef051d-ddcf-4205-b467-15e33625d18f
STEP: Creating a pod to test consume configMaps
Aug 14 11:31:36.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16" in namespace "configmap-11" to be "success or failure"
Aug 14 11:31:36.549: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Pending", Reason="", readiness=false. Elapsed: 62.944199ms
Aug 14 11:31:38.552: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066565745s
Aug 14 11:31:40.555: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069534496s
Aug 14 11:31:42.602: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116089615s
Aug 14 11:31:44.605: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118909485s
Aug 14 11:31:46.790: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Pending", Reason="", readiness=false. Elapsed: 10.304097149s
Aug 14 11:31:49.084: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Running", Reason="", readiness=true. Elapsed: 12.597722334s
Aug 14 11:31:51.087: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.600638961s
STEP: Saw pod success
Aug 14 11:31:51.087: INFO: Pod "pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16" satisfied condition "success or failure"
Aug 14 11:31:51.089: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16 container configmap-volume-test: 
STEP: delete the pod
Aug 14 11:31:51.803: INFO: Waiting for pod pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16 to disappear
Aug 14 11:31:52.604: INFO: Pod pod-configmaps-b3ef26a1-1523-4939-ab74-27222f767e16 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:31:52.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-11" for this suite.
Aug 14 11:31:59.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:32:01.287: INFO: namespace configmap-11 deletion completed in 8.170929458s

• [SLOW TEST:25.765 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:32:01.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-8d34ec0e-8e7d-436e-b494-da5a9fdda618
STEP: Creating a pod to test consume configMaps
Aug 14 11:32:02.909: INFO: Waiting up to 5m0s for pod "pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78" in namespace "configmap-7369" to be "success or failure"
Aug 14 11:32:02.940: INFO: Pod "pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78": Phase="Pending", Reason="", readiness=false. Elapsed: 31.504155ms
Aug 14 11:32:04.995: INFO: Pod "pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086201744s
Aug 14 11:32:07.197: INFO: Pod "pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288592913s
Aug 14 11:32:09.201: INFO: Pod "pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.292762115s
STEP: Saw pod success
Aug 14 11:32:09.202: INFO: Pod "pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78" satisfied condition "success or failure"
Aug 14 11:32:09.204: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78 container configmap-volume-test: 
STEP: delete the pod
Aug 14 11:32:09.762: INFO: Waiting for pod pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78 to disappear
Aug 14 11:32:09.846: INFO: Pod pod-configmaps-940e0da3-e702-421a-ab26-e66bfa2fde78 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:32:09.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7369" for this suite.
Aug 14 11:32:18.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:32:18.949: INFO: namespace configmap-7369 deletion completed in 9.099429476s

• [SLOW TEST:17.662 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:32:18.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 14 11:32:20.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1098'
Aug 14 11:32:21.107: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 14 11:32:21.107: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Aug 14 11:32:25.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1098'
Aug 14 11:32:26.367: INFO: stderr: ""
Aug 14 11:32:26.367: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:32:26.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1098" for this suite.
Aug 14 11:32:36.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:32:36.660: INFO: namespace kubectl-1098 deletion completed in 9.683971813s

• [SLOW TEST:17.710 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:32:36.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Aug 14 11:32:38.884: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9373" to be "success or failure"
Aug 14 11:32:39.427: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 543.013374ms
Aug 14 11:32:41.855: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971285783s
Aug 14 11:32:44.412: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.528289279s
Aug 14 11:32:46.432: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.547973261s
Aug 14 11:32:48.456: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.572451793s
Aug 14 11:32:50.492: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.607954255s
Aug 14 11:32:52.893: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 14.009466708s
Aug 14 11:32:54.897: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.013019333s
STEP: Saw pod success
Aug 14 11:32:54.897: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 14 11:32:54.899: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 14 11:32:54.988: INFO: Waiting for pod pod-host-path-test to disappear
Aug 14 11:32:55.078: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:32:55.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9373" for this suite.
Aug 14 11:33:05.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:33:05.305: INFO: namespace hostpath-9373 deletion completed in 10.222505248s

• [SLOW TEST:28.645 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:33:05.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:33:05.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571" in namespace "projected-7182" to be "success or failure"
Aug 14 11:33:05.856: INFO: Pod "downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571": Phase="Pending", Reason="", readiness=false. Elapsed: 31.285802ms
Aug 14 11:33:08.013: INFO: Pod "downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187592524s
Aug 14 11:33:10.016: INFO: Pod "downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190361777s
Aug 14 11:33:12.134: INFO: Pod "downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571": Phase="Running", Reason="", readiness=true. Elapsed: 6.309140276s
Aug 14 11:33:14.282: INFO: Pod "downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.457062396s
STEP: Saw pod success
Aug 14 11:33:14.282: INFO: Pod "downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571" satisfied condition "success or failure"
Aug 14 11:33:14.285: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571 container client-container: 
STEP: delete the pod
Aug 14 11:33:14.327: INFO: Waiting for pod downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571 to disappear
Aug 14 11:33:14.331: INFO: Pod downwardapi-volume-56660e79-56fe-4504-adce-3a0976419571 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:33:14.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7182" for this suite.
Aug 14 11:33:22.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:33:23.002: INFO: namespace projected-7182 deletion completed in 8.666508371s

• [SLOW TEST:17.696 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:33:23.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5515.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5515.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 11:33:45.430: INFO: DNS probes using dns-5515/dns-test-386abf95-c186-43da-b4c1-3af97cc4f4d9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:33:45.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5515" for this suite.
Aug 14 11:33:58.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:33:58.400: INFO: namespace dns-5515 deletion completed in 12.767611182s

• [SLOW TEST:35.397 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:33:58.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a283654d-5f16-4410-8bcb-e81e3846393b
STEP: Creating a pod to test consume configMaps
Aug 14 11:33:58.897: INFO: Waiting up to 5m0s for pod "pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584" in namespace "configmap-5592" to be "success or failure"
Aug 14 11:33:58.922: INFO: Pod "pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584": Phase="Pending", Reason="", readiness=false. Elapsed: 25.715686ms
Aug 14 11:34:00.927: INFO: Pod "pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029828141s
Aug 14 11:34:02.948: INFO: Pod "pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051197004s
Aug 14 11:34:04.951: INFO: Pod "pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584": Phase="Running", Reason="", readiness=true. Elapsed: 6.05461993s
Aug 14 11:34:06.955: INFO: Pod "pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058134701s
STEP: Saw pod success
Aug 14 11:34:06.955: INFO: Pod "pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584" satisfied condition "success or failure"
Aug 14 11:34:06.957: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584 container configmap-volume-test: 
STEP: delete the pod
Aug 14 11:34:07.154: INFO: Waiting for pod pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584 to disappear
Aug 14 11:34:07.167: INFO: Pod pod-configmaps-587c38af-57b6-441d-b86f-ede9fa4ba584 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:34:07.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5592" for this suite.
Aug 14 11:34:13.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:34:13.316: INFO: namespace configmap-5592 deletion completed in 6.146271093s

• [SLOW TEST:14.916 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:34:13.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 14 11:34:22.451: INFO: Successfully updated pod "pod-update-064ec86c-7c46-4319-97fa-0c9a1d8347a2"
STEP: verifying the updated pod is in kubernetes
Aug 14 11:34:22.522: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:34:22.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9544" for this suite.
Aug 14 11:34:47.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:34:47.570: INFO: namespace pods-9544 deletion completed in 25.043017965s

• [SLOW TEST:34.253 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:34:47.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b28fb41f-cc24-4f71-95d7-dc6aa440f90c
STEP: Creating a pod to test consume configMaps
Aug 14 11:34:47.685: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e" in namespace "configmap-1991" to be "success or failure"
Aug 14 11:34:47.702: INFO: Pod "pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.033664ms
Aug 14 11:34:50.068: INFO: Pod "pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382749979s
Aug 14 11:34:52.266: INFO: Pod "pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580711882s
Aug 14 11:34:54.331: INFO: Pod "pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64605302s
Aug 14 11:34:56.335: INFO: Pod "pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.64972508s
STEP: Saw pod success
Aug 14 11:34:56.335: INFO: Pod "pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e" satisfied condition "success or failure"
Aug 14 11:34:56.338: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e container configmap-volume-test: 
STEP: delete the pod
Aug 14 11:34:56.530: INFO: Waiting for pod pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e to disappear
Aug 14 11:34:56.889: INFO: Pod pod-configmaps-e9fad31b-296d-4e47-a087-0a428430178e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:34:56.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1991" for this suite.
Aug 14 11:35:07.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:35:08.800: INFO: namespace configmap-1991 deletion completed in 11.860617291s

• [SLOW TEST:21.230 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:35:08.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 14 11:35:24.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:24.618: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:26.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:26.622: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:28.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:28.835: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:30.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:30.622: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:32.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:32.623: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:34.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:34.622: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:36.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:36.623: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:38.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:38.638: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:40.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:40.622: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:42.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:42.621: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:44.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:44.623: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:46.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:46.622: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:48.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:48.827: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:50.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:50.623: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:52.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:52.624: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:54.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:54.622: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 11:35:56.619: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 11:35:56.662: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:35:56.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8139" for this suite.
Aug 14 11:36:26.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:36:27.129: INFO: namespace container-lifecycle-hook-8139 deletion completed in 30.45476791s

• [SLOW TEST:78.327 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:36:27.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 14 11:36:27.191: INFO: Waiting up to 5m0s for pod "pod-9371cc12-7e4d-488f-9716-854e8439e354" in namespace "emptydir-891" to be "success or failure"
Aug 14 11:36:27.213: INFO: Pod "pod-9371cc12-7e4d-488f-9716-854e8439e354": Phase="Pending", Reason="", readiness=false. Elapsed: 22.482054ms
Aug 14 11:36:29.261: INFO: Pod "pod-9371cc12-7e4d-488f-9716-854e8439e354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070235164s
Aug 14 11:36:31.369: INFO: Pod "pod-9371cc12-7e4d-488f-9716-854e8439e354": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177764471s
Aug 14 11:36:33.372: INFO: Pod "pod-9371cc12-7e4d-488f-9716-854e8439e354": Phase="Running", Reason="", readiness=true. Elapsed: 6.180731126s
Aug 14 11:36:35.573: INFO: Pod "pod-9371cc12-7e4d-488f-9716-854e8439e354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.381815836s
STEP: Saw pod success
Aug 14 11:36:35.573: INFO: Pod "pod-9371cc12-7e4d-488f-9716-854e8439e354" satisfied condition "success or failure"
Aug 14 11:36:35.575: INFO: Trying to get logs from node iruya-worker2 pod pod-9371cc12-7e4d-488f-9716-854e8439e354 container test-container: 
STEP: delete the pod
Aug 14 11:36:36.431: INFO: Waiting for pod pod-9371cc12-7e4d-488f-9716-854e8439e354 to disappear
Aug 14 11:36:37.287: INFO: Pod pod-9371cc12-7e4d-488f-9716-854e8439e354 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:36:37.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-891" for this suite.
Aug 14 11:36:47.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:36:47.392: INFO: namespace emptydir-891 deletion completed in 9.764373228s

• [SLOW TEST:20.263 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:36:47.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-e98e1d0d-b2eb-4817-96cd-96e36f33f4c9
STEP: Creating a pod to test consume secrets
Aug 14 11:36:49.711: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65" in namespace "projected-5551" to be "success or failure"
Aug 14 11:36:49.914: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65": Phase="Pending", Reason="", readiness=false. Elapsed: 203.067622ms
Aug 14 11:36:52.807: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65": Phase="Pending", Reason="", readiness=false. Elapsed: 3.096386981s
Aug 14 11:36:54.873: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65": Phase="Pending", Reason="", readiness=false. Elapsed: 5.162338093s
Aug 14 11:36:56.877: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65": Phase="Pending", Reason="", readiness=false. Elapsed: 7.166128024s
Aug 14 11:36:58.950: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65": Phase="Pending", Reason="", readiness=false. Elapsed: 9.239468455s
Aug 14 11:37:01.202: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65": Phase="Running", Reason="", readiness=true. Elapsed: 11.490955635s
Aug 14 11:37:03.785: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.074231369s
STEP: Saw pod success
Aug 14 11:37:03.785: INFO: Pod "pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65" satisfied condition "success or failure"
Aug 14 11:37:04.184: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65 container secret-volume-test: 
STEP: delete the pod
Aug 14 11:37:04.444: INFO: Waiting for pod pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65 to disappear
Aug 14 11:37:04.681: INFO: Pod pod-projected-secrets-fe7ec326-bb31-47a5-b050-2a7befedfe65 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:37:04.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5551" for this suite.
Aug 14 11:37:13.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:37:13.736: INFO: namespace projected-5551 deletion completed in 9.051969574s

• [SLOW TEST:26.343 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:37:13.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:37:22.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8055" for this suite.
Aug 14 11:38:22.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:38:22.475: INFO: namespace kubelet-test-8055 deletion completed in 1m0.186729933s

• [SLOW TEST:68.739 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:38:22.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 11:38:23.026: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:38:24.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7282" for this suite.
Aug 14 11:38:33.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:38:33.315: INFO: namespace custom-resource-definition-7282 deletion completed in 8.829233458s

• [SLOW TEST:10.839 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:38:33.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:38:36.767: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959" in namespace "downward-api-6506" to be "success or failure"
Aug 14 11:38:36.770: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 3.464919ms
Aug 14 11:38:38.774: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007155033s
Aug 14 11:38:41.535: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 4.767879316s
Aug 14 11:38:43.539: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772457213s
Aug 14 11:38:46.065: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 9.297762962s
Aug 14 11:38:49.869: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 13.101703419s
Aug 14 11:38:51.873: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 15.106049984s
Aug 14 11:38:54.132: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Pending", Reason="", readiness=false. Elapsed: 17.365351216s
Aug 14 11:38:56.156: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.388632581s
STEP: Saw pod success
Aug 14 11:38:56.156: INFO: Pod "downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959" satisfied condition "success or failure"
Aug 14 11:38:56.158: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959 container client-container: 
STEP: delete the pod
Aug 14 11:38:56.899: INFO: Waiting for pod downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959 to disappear
Aug 14 11:38:57.114: INFO: Pod downwardapi-volume-47ef0478-9389-41bd-83cf-2fce28990959 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:38:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6506" for this suite.
Aug 14 11:39:06.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:39:06.744: INFO: namespace downward-api-6506 deletion completed in 9.625805698s

• [SLOW TEST:33.429 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:39:06.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Aug 14 11:39:07.797: INFO: Waiting up to 5m0s for pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee" in namespace "var-expansion-3771" to be "success or failure"
Aug 14 11:39:08.054: INFO: Pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee": Phase="Pending", Reason="", readiness=false. Elapsed: 257.308233ms
Aug 14 11:39:10.058: INFO: Pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261095837s
Aug 14 11:39:12.671: INFO: Pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.873962144s
Aug 14 11:39:14.675: INFO: Pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.877603434s
Aug 14 11:39:17.068: INFO: Pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee": Phase="Running", Reason="", readiness=true. Elapsed: 9.27044807s
Aug 14 11:39:19.071: INFO: Pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.273455596s
STEP: Saw pod success
Aug 14 11:39:19.071: INFO: Pod "var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee" satisfied condition "success or failure"
Aug 14 11:39:19.101: INFO: Trying to get logs from node iruya-worker pod var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee container dapi-container: 
STEP: delete the pod
Aug 14 11:39:19.372: INFO: Waiting for pod var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee to disappear
Aug 14 11:39:19.407: INFO: Pod var-expansion-5a43052f-21b9-4447-89e5-b768b10b3aee no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:39:19.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3771" for this suite.
Aug 14 11:39:27.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:39:27.506: INFO: namespace var-expansion-3771 deletion completed in 8.094897635s

• [SLOW TEST:20.762 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:39:27.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:39:38.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4189" for this suite.
Aug 14 11:39:47.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:39:48.019: INFO: namespace emptydir-wrapper-4189 deletion completed in 9.37109675s

• [SLOW TEST:20.513 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:39:48.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:39:48.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1588" for this suite.
Aug 14 11:39:56.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:39:57.370: INFO: namespace services-1588 deletion completed in 8.758705867s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:9.351 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:39:57.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:39:58.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd" in namespace "projected-9211" to be "success or failure"
Aug 14 11:39:58.365: INFO: Pod "downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.849218ms
Aug 14 11:40:00.408: INFO: Pod "downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050616564s
Aug 14 11:40:02.642: INFO: Pod "downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285295448s
Aug 14 11:40:04.678: INFO: Pod "downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd": Phase="Running", Reason="", readiness=true. Elapsed: 6.320778973s
Aug 14 11:40:07.433: INFO: Pod "downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.075549268s
STEP: Saw pod success
Aug 14 11:40:07.433: INFO: Pod "downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd" satisfied condition "success or failure"
Aug 14 11:40:07.485: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd container client-container: 
STEP: delete the pod
Aug 14 11:40:08.483: INFO: Waiting for pod downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd to disappear
Aug 14 11:40:08.899: INFO: Pod downwardapi-volume-8ecf8d19-c92e-4358-b6f5-013f1f22fcbd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:40:08.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9211" for this suite.
Aug 14 11:40:16.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:40:17.336: INFO: namespace projected-9211 deletion completed in 8.433596242s

• [SLOW TEST:19.966 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:40:17.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 14 11:40:17.506: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug 14 11:40:18.659: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 14 11:40:22.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 11:40:24.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 11:40:26.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 11:40:30.654: INFO: Waited 2.146861702s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:40:35.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9675" for this suite.
Aug 14 11:40:45.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:40:45.271: INFO: namespace aggregator-9675 deletion completed in 10.168399533s

• [SLOW TEST:27.935 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:40:45.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-xvqn
STEP: Creating a pod to test atomic-volume-subpath
Aug 14 11:40:45.361: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xvqn" in namespace "subpath-2172" to be "success or failure"
Aug 14 11:40:45.402: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Pending", Reason="", readiness=false. Elapsed: 40.680384ms
Aug 14 11:40:47.427: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065775895s
Aug 14 11:40:49.431: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069872002s
Aug 14 11:40:51.787: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42563063s
Aug 14 11:40:53.865: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504065487s
Aug 14 11:40:55.869: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 10.507660458s
Aug 14 11:40:57.872: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 12.510945107s
Aug 14 11:40:59.912: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 14.55094052s
Aug 14 11:41:01.916: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 16.554899152s
Aug 14 11:41:04.146: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 18.784770788s
Aug 14 11:41:06.150: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 20.788895831s
Aug 14 11:41:08.493: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 23.132169632s
Aug 14 11:41:10.631: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 25.269734077s
Aug 14 11:41:12.944: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Running", Reason="", readiness=true. Elapsed: 27.582831334s
Aug 14 11:41:14.949: INFO: Pod "pod-subpath-test-configmap-xvqn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.587522047s
STEP: Saw pod success
Aug 14 11:41:14.949: INFO: Pod "pod-subpath-test-configmap-xvqn" satisfied condition "success or failure"
Aug 14 11:41:14.951: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-xvqn container test-container-subpath-configmap-xvqn: 
STEP: delete the pod
Aug 14 11:41:15.112: INFO: Waiting for pod pod-subpath-test-configmap-xvqn to disappear
Aug 14 11:41:15.146: INFO: Pod pod-subpath-test-configmap-xvqn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xvqn
Aug 14 11:41:15.146: INFO: Deleting pod "pod-subpath-test-configmap-xvqn" in namespace "subpath-2172"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:41:15.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2172" for this suite.
Aug 14 11:41:21.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:41:21.265: INFO: namespace subpath-2172 deletion completed in 6.109185246s

• [SLOW TEST:35.993 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:41:21.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-b3f4b527-bb0e-4f0e-822e-7623af9c322e
STEP: Creating a pod to test consume configMaps
Aug 14 11:41:21.609: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385" in namespace "projected-6507" to be "success or failure"
Aug 14 11:41:21.613: INFO: Pod "pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385": Phase="Pending", Reason="", readiness=false. Elapsed: 3.470934ms
Aug 14 11:41:23.616: INFO: Pod "pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006789827s
Aug 14 11:41:25.620: INFO: Pod "pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011064843s
Aug 14 11:41:27.624: INFO: Pod "pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014600066s
STEP: Saw pod success
Aug 14 11:41:27.624: INFO: Pod "pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385" satisfied condition "success or failure"
Aug 14 11:41:27.626: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 14 11:41:27.876: INFO: Waiting for pod pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385 to disappear
Aug 14 11:41:27.947: INFO: Pod pod-projected-configmaps-9738d559-8b89-4071-a3a7-968874355385 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:41:27.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6507" for this suite.
Aug 14 11:41:38.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:41:39.446: INFO: namespace projected-6507 deletion completed in 11.495479006s

• [SLOW TEST:18.181 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:41:39.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:41:49.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9920" for this suite.
Aug 14 11:41:55.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:41:55.797: INFO: namespace namespaces-9920 deletion completed in 6.367042445s
STEP: Destroying namespace "nsdeletetest-7147" for this suite.
Aug 14 11:41:55.799: INFO: Namespace nsdeletetest-7147 was already deleted
STEP: Destroying namespace "nsdeletetest-605" for this suite.
Aug 14 11:42:01.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:42:01.886: INFO: namespace nsdeletetest-605 deletion completed in 6.087466386s

• [SLOW TEST:22.440 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:42:01.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 11:42:01.978: INFO: Creating deployment "test-recreate-deployment"
Aug 14 11:42:01.985: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 14 11:42:01.997: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 14 11:42:04.006: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 14 11:42:04.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002122, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002122, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002122, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002121, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 11:42:06.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002122, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002122, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002122, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002121, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 11:42:08.064: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 14 11:42:08.071: INFO: Updating deployment test-recreate-deployment
Aug 14 11:42:08.071: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 14 11:42:09.178: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-981,SelfLink:/apis/apps/v1/namespaces/deployment-981/deployments/test-recreate-deployment,UID:ddcad5c5-243f-42be-bbb6-cf5bf6bf666b,ResourceVersion:4881864,Generation:2,CreationTimestamp:2020-08-14 11:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-14 11:42:08 +0000 UTC 2020-08-14 11:42:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-14 11:42:09 +0000 UTC 2020-08-14 11:42:01 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Aug 14 11:42:09.182: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-981,SelfLink:/apis/apps/v1/namespaces/deployment-981/replicasets/test-recreate-deployment-5c8c9cc69d,UID:beb4d831-0513-4f1e-aaff-4fb655e4ac15,ResourceVersion:4881862,Generation:1,CreationTimestamp:2020-08-14 11:42:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ddcad5c5-243f-42be-bbb6-cf5bf6bf666b 0xc002807237 0xc002807238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 14 11:42:09.182: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 14 11:42:09.183: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-981,SelfLink:/apis/apps/v1/namespaces/deployment-981/replicasets/test-recreate-deployment-6df85df6b9,UID:3b7dea16-4824-4673-b923-8aa851ee2f4f,ResourceVersion:4881850,Generation:2,CreationTimestamp:2020-08-14 11:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ddcad5c5-243f-42be-bbb6-cf5bf6bf666b 0xc002807307 0xc002807308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 14 11:42:09.186: INFO: Pod "test-recreate-deployment-5c8c9cc69d-5zgpj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-5zgpj,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-981,SelfLink:/api/v1/namespaces/deployment-981/pods/test-recreate-deployment-5c8c9cc69d-5zgpj,UID:c5f3bfaa-b172-48e6-81e4-6d8e11be5f76,ResourceVersion:4881861,Generation:0,CreationTimestamp:2020-08-14 11:42:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d beb4d831-0513-4f1e-aaff-4fb655e4ac15 0xc0025deb37 0xc0025deb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s6z6x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s6z6x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-s6z6x true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025debb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025debd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 11:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 11:42:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 11:42:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 11:42:08 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-14 11:42:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:42:09.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-981" for this suite.
Aug 14 11:42:17.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:42:17.901: INFO: namespace deployment-981 deletion completed in 8.710436295s

• [SLOW TEST:16.014 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
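The Deployment above uses `Strategy: Recreate` (visible in the dumped spec), whose guarantee is that the old ReplicaSet is scaled to zero before any new pod is created, so old and new pods never run together. A minimal Python sketch of that ordering, assuming a toy model rather than the controller's actual Go code (`recreate_rollout` is an illustrative name):

```python
def recreate_rollout(old_pods, new_template, replicas):
    """Sketch of the Recreate strategy's ordering guarantee:
    delete every old pod first, only then create new ones."""
    deleted = list(old_pods)
    old_pods.clear()                       # old ReplicaSet scaled to 0
    new_pods = [f"{new_template}-{i}" for i in range(replicas)]
    return deleted, new_pods               # no overlap was ever possible

old = ["test-recreate-deployment-6df85df6b9-xxxxx"]
gone, created = recreate_rollout(old, "test-recreate-deployment-5c8c9cc69d", 1)
```

This is why the log shows the old ReplicaSet at `Replicas:*0` while the new pod is still `Pending`: under Recreate there is a window with zero available replicas, unlike RollingUpdate.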
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:42:17.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-x5mc
STEP: Creating a pod to test atomic-volume-subpath
Aug 14 11:42:19.095: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x5mc" in namespace "subpath-2122" to be "success or failure"
Aug 14 11:42:19.111: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.434116ms
Aug 14 11:42:21.115: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019407304s
Aug 14 11:42:23.119: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023684481s
Aug 14 11:42:25.458: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363088404s
Aug 14 11:42:27.462: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 8.36673643s
Aug 14 11:42:29.467: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 10.37162558s
Aug 14 11:42:31.471: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 12.375363059s
Aug 14 11:42:33.474: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 14.37897727s
Aug 14 11:42:35.479: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 16.383134784s
Aug 14 11:42:37.482: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 18.386618537s
Aug 14 11:42:39.485: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 20.390066302s
Aug 14 11:42:41.489: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 22.393418904s
Aug 14 11:42:43.493: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 24.397423855s
Aug 14 11:42:45.498: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 26.402503899s
Aug 14 11:42:47.555: INFO: Pod "pod-subpath-test-configmap-x5mc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.459256932s
STEP: Saw pod success
Aug 14 11:42:47.555: INFO: Pod "pod-subpath-test-configmap-x5mc" satisfied condition "success or failure"
Aug 14 11:42:47.557: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-x5mc container test-container-subpath-configmap-x5mc: 
STEP: delete the pod
Aug 14 11:42:47.694: INFO: Waiting for pod pod-subpath-test-configmap-x5mc to disappear
Aug 14 11:42:47.703: INFO: Pod pod-subpath-test-configmap-x5mc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x5mc
Aug 14 11:42:47.703: INFO: Deleting pod "pod-subpath-test-configmap-x5mc" in namespace "subpath-2122"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:42:47.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2122" for this suite.
Aug 14 11:42:54.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:42:54.348: INFO: namespace subpath-2122 deletion completed in 6.640478725s

• [SLOW TEST:36.448 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
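The subpath test's "Waiting up to 5m0s for pod ... to be 'success or failure'" lines come from a poll loop that checks the pod phase every couple of seconds until it reaches a terminal state. A minimal Python sketch of that loop, assuming a stubbed phase source in place of a real API call (`wait_for_pod_phase` is an illustrative name, not the framework's Go helper):

```python
import itertools
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300, interval=2):
    """Poll until the pod hits a terminal phase or the timeout elapses,
    mirroring the framework's 'Waiting up to 5m0s ...' loop."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Stubbed phase sequence matching the log: Pending x4, Running x10, Succeeded.
phases = itertools.chain(["Pending"] * 4, ["Running"] * 10,
                         itertools.repeat("Succeeded"))
result = wait_for_pod_phase(lambda: next(phases), interval=0)
```

The long `Running` stretch in the log is expected: the atomic-writer container reads the subpath-mounted file repeatedly before exiting 0, which flips the phase to `Succeeded`.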
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:42:54.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e7509b30-5f75-4c05-8f2f-56620438989c
STEP: Creating a pod to test consume configMaps
Aug 14 11:42:54.797: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98" in namespace "projected-2265" to be "success or failure"
Aug 14 11:42:54.849: INFO: Pod "pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98": Phase="Pending", Reason="", readiness=false. Elapsed: 51.111599ms
Aug 14 11:42:56.853: INFO: Pod "pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054973055s
Aug 14 11:42:58.857: INFO: Pod "pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059231828s
Aug 14 11:43:00.860: INFO: Pod "pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062636624s
STEP: Saw pod success
Aug 14 11:43:00.860: INFO: Pod "pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98" satisfied condition "success or failure"
Aug 14 11:43:00.863: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 14 11:43:01.256: INFO: Waiting for pod pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98 to disappear
Aug 14 11:43:01.632: INFO: Pod pod-projected-configmaps-56d3a777-e576-429c-81ad-edaac4d61f98 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:43:01.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2265" for this suite.
Aug 14 11:43:09.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:43:10.685: INFO: namespace projected-2265 deletion completed in 9.047212685s

• [SLOW TEST:16.336 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
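The `defaultMode` this test exercises appears in the dumped specs as a decimal integer (e.g. `DefaultMode:*420`), because the API serializes file modes as plain int32 values. A short sketch showing the decimal-to-octal correspondence with the standard library:

```python
import stat

# The API stores volume file modes as decimal integers;
# 420 decimal is the familiar octal 0644 (rw-r--r--).
default_mode = 420
as_octal = oct(default_mode)                              # '0o644'
as_string = stat.filemode(stat.S_IFREG | default_mode)    # '-rw-r--r--'
```

This is a common source of confusion when reading dumped pod specs: `420` and `0644` are the same mode, just printed in different bases.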
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:43:10.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9531.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9531.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9531.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9531.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9531.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9531.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 11:43:24.340: INFO: DNS probes using dns-9531/dns-test-9148f115-2b40-4ebc-b72c-5c70d261bd19 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:43:24.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9531" for this suite.
Aug 14 11:43:30.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:43:30.587: INFO: namespace dns-9531 deletion completed in 6.133251114s

• [SLOW TEST:19.902 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
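The probe commands above build the pod's DNS A record name with `awk`: the dots of the pod IP become dashes, suffixed with `<namespace>.pod.cluster.local`. The same transformation in Python, as a minimal sketch (`pod_a_record` and the example IP are illustrative):

```python
def pod_a_record(pod_ip, namespace):
    """Build the pod A record the probes resolve, mirroring the awk
    one-liner: dots in the IP become dashes, then the pod suffix."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

record = pod_a_record("10.244.1.5", "dns-9531")
```

Both the UDP and TCP `dig` probes in the log query exactly this record; the test passes once every prober writes its `OK` result file.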
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:43:30.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 14 11:43:30.628: INFO: PodSpec: initContainers in spec.initContainers
Aug 14 11:44:32.261: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4e4ce609-3291-4050-a06b-94d140dd67c4", GenerateName:"", Namespace:"init-container-4454", SelfLink:"/api/v1/namespaces/init-container-4454/pods/pod-init-4e4ce609-3291-4050-a06b-94d140dd67c4", UID:"f7c96fdb-d94c-4fa5-b0bd-f8d7ea92cbfd", ResourceVersion:"4882285", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733002210, loc:(*time.Location)(0x7eb18c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"628070372"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fmdht", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000cd4280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fmdht", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fmdht", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fmdht", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003356f58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc001640720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003356fe0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003357000)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003357008), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00335700c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002210, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002210, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002210, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733002210, loc:(*time.Location)(0x7eb18c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"10.244.2.64", StartTime:(*v1.Time)(0xc001971b00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001971b40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f933b0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6b9b9d95c4dbccef610aa9c04470d65b494c48b78bc77e44e6934dd57c80c4e8"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001971b60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001971b20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:44:32.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4454" for this suite.
Aug 14 11:44:54.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:44:55.235: INFO: namespace init-container-4454 deletion completed in 22.617631594s

• [SLOW TEST:84.648 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
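Editor's note: the Go struct dumped above (L5994‑5998 in the raw log) is hard to read; it corresponds to roughly the following manifest. This is a hedged reconstruction from the logged spec only — names, images, commands, and the resource quantities are taken from the dump; every other field is left to cluster defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-4e4ce609-3291-4050-a06b-94d140dd67c4
  namespace: init-container-4454
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                # equal requests and limits -> QOSClass "Guaranteed", as logged
      requests:
        cpu: 100m
        memory: "52428800"
      limits:
        cpu: 100m
        memory: "52428800"
```

With restartPolicy Always, the kubelet keeps restarting init1 (RestartCount 3 in the dump) and the pod stays Pending with run1 Waiting, which is exactly the condition the test asserts.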
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:44:55.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-99vcr in namespace proxy-9910
I0814 11:44:56.420685       6 runners.go:180] Created replication controller with name: proxy-service-99vcr, namespace: proxy-9910, replica count: 1
I0814 11:44:57.471296       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 11:44:58.471514       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 11:44:59.471758       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 11:45:00.471969       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 11:45:01.472178       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 11:45:02.472492       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 11:45:03.472674       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0814 11:45:04.472937       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0814 11:45:05.473142       6 runners.go:180] proxy-service-99vcr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 14 11:45:05.476: INFO: setup took 9.44628615s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 14 11:45:05.485: INFO: (0) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 9.035768ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 9.38448ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 9.409855ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 9.487052ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 9.583076ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 9.591108ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 9.846964ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 9.976962ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 9.968849ms)
Aug 14 11:45:05.486: INFO: (0) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 9.734239ms)
Aug 14 11:45:05.487: INFO: (0) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 10.174234ms)
Aug 14 11:45:05.771: INFO: (0) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 294.964242ms)
Aug 14 11:45:05.771: INFO: (0) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 4.414469ms)
Aug 14 11:45:05.777: INFO: (1) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 5.15162ms)
Aug 14 11:45:05.777: INFO: (1) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 5.861912ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 5.990621ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 6.11408ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 6.061565ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 6.251919ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 6.371858ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 6.543209ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 6.46491ms)
Aug 14 11:45:05.778: INFO: (1) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: ... (200; 6.948942ms)
Aug 14 11:45:05.783: INFO: (2) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.902238ms)
Aug 14 11:45:05.783: INFO: (2) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test (200; 4.074953ms)
Aug 14 11:45:05.783: INFO: (2) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 4.144133ms)
Aug 14 11:45:05.783: INFO: (2) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.15297ms)
Aug 14 11:45:05.783: INFO: (2) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 4.116647ms)
Aug 14 11:45:05.783: INFO: (2) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 4.195249ms)
Aug 14 11:45:05.784: INFO: (2) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.735862ms)
Aug 14 11:45:05.784: INFO: (2) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.979882ms)
Aug 14 11:45:05.784: INFO: (2) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 4.955931ms)
Aug 14 11:45:05.784: INFO: (2) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 5.171857ms)
Aug 14 11:45:05.784: INFO: (2) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 5.224959ms)
Aug 14 11:45:05.784: INFO: (2) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 5.237393ms)
Aug 14 11:45:05.788: INFO: (3) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.349964ms)
Aug 14 11:45:05.789: INFO: (3) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.647927ms)
Aug 14 11:45:05.789: INFO: (3) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.652963ms)
Aug 14 11:45:05.789: INFO: (3) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 5.018615ms)
Aug 14 11:45:05.789: INFO: (3) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 4.989577ms)
Aug 14 11:45:05.789: INFO: (3) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: ... (200; 5.884777ms)
Aug 14 11:45:05.790: INFO: (3) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 6.021905ms)
Aug 14 11:45:05.790: INFO: (3) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 6.220502ms)
Aug 14 11:45:05.790: INFO: (3) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 6.222287ms)
Aug 14 11:45:05.790: INFO: (3) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 6.104626ms)
Aug 14 11:45:05.790: INFO: (3) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 6.060254ms)
Aug 14 11:45:05.790: INFO: (3) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 6.214387ms)
Aug 14 11:45:05.791: INFO: (3) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 6.313725ms)
Aug 14 11:45:05.794: INFO: (4) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 3.530005ms)
Aug 14 11:45:05.794: INFO: (4) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 3.800815ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 3.818417ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 3.838382ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.941558ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.21237ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 4.089367ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.109042ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.10042ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 4.125417ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: ... (200; 4.560603ms)
Aug 14 11:45:05.795: INFO: (4) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.501387ms)
Aug 14 11:45:05.799: INFO: (5) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.299384ms)
Aug 14 11:45:05.799: INFO: (5) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.008265ms)
Aug 14 11:45:05.799: INFO: (5) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.012578ms)
Aug 14 11:45:05.799: INFO: (5) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.219263ms)
Aug 14 11:45:05.799: INFO: (5) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 4.199501ms)
Aug 14 11:45:05.799: INFO: (5) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 4.236389ms)
Aug 14 11:45:05.800: INFO: (5) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test (200; 4.362565ms)
Aug 14 11:45:05.800: INFO: (5) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 4.429882ms)
Aug 14 11:45:05.800: INFO: (5) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.485745ms)
Aug 14 11:45:05.800: INFO: (5) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.757718ms)
Aug 14 11:45:05.800: INFO: (5) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 4.800033ms)
Aug 14 11:45:05.800: INFO: (5) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 4.79875ms)
Aug 14 11:45:05.800: INFO: (5) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 4.824825ms)
Aug 14 11:45:05.801: INFO: (5) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 5.554856ms)
Aug 14 11:45:05.804: INFO: (6) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 2.596438ms)
Aug 14 11:45:05.804: INFO: (6) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test (200; 4.095206ms)
Aug 14 11:45:05.805: INFO: (6) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 4.122009ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.858586ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.826337ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.960559ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 4.931092ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.961621ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 5.265444ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 5.246166ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 5.263118ms)
Aug 14 11:45:05.806: INFO: (6) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 5.446352ms)
Aug 14 11:45:05.808: INFO: (6) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 6.82077ms)
Aug 14 11:45:05.810: INFO: (7) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 2.197908ms)
Aug 14 11:45:05.810: INFO: (7) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 2.347765ms)
Aug 14 11:45:05.810: INFO: (7) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 2.490453ms)
Aug 14 11:45:05.811: INFO: (7) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.480348ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 3.640074ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.693595ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.824243ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 3.918206ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.056146ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.233344ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.227109ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 4.20328ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 4.259715ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.251956ms)
Aug 14 11:45:05.812: INFO: (7) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 3.798072ms)
Aug 14 11:45:05.816: INFO: (8) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 3.855175ms)
Aug 14 11:45:05.816: INFO: (8) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: ... (200; 4.770201ms)
Aug 14 11:45:05.817: INFO: (8) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 4.721443ms)
Aug 14 11:45:05.818: INFO: (8) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 5.654296ms)
Aug 14 11:45:05.818: INFO: (8) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 5.913959ms)
Aug 14 11:45:05.818: INFO: (8) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 5.857274ms)
Aug 14 11:45:05.819: INFO: (8) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 5.979588ms)
Aug 14 11:45:05.819: INFO: (8) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 6.044754ms)
Aug 14 11:45:05.819: INFO: (8) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 6.076524ms)
Aug 14 11:45:05.819: INFO: (8) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 6.321759ms)
Aug 14 11:45:05.822: INFO: (9) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 2.861326ms)
Aug 14 11:45:05.822: INFO: (9) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 3.285406ms)
Aug 14 11:45:05.822: INFO: (9) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.216172ms)
Aug 14 11:45:05.822: INFO: (9) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 3.271998ms)
Aug 14 11:45:05.822: INFO: (9) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.273395ms)
Aug 14 11:45:05.822: INFO: (9) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 3.30663ms)
Aug 14 11:45:05.823: INFO: (9) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.869516ms)
Aug 14 11:45:05.823: INFO: (9) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.904817ms)
Aug 14 11:45:05.823: INFO: (9) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 3.932857ms)
Aug 14 11:45:05.823: INFO: (9) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.333581ms)
Aug 14 11:45:05.823: INFO: (9) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.366469ms)
Aug 14 11:45:05.823: INFO: (9) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 4.453305ms)
Aug 14 11:45:05.824: INFO: (9) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 4.574952ms)
Aug 14 11:45:05.824: INFO: (9) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.606137ms)
Aug 14 11:45:05.824: INFO: (9) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 4.727576ms)
Aug 14 11:45:05.826: INFO: (10) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 2.802445ms)
Aug 14 11:45:05.827: INFO: (10) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 3.419844ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.187792ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 4.108687ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 4.238762ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 4.252283ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.339875ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 4.36971ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 4.322055ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.297965ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.310088ms)
Aug 14 11:45:05.828: INFO: (10) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 2.565219ms)
Aug 14 11:45:05.832: INFO: (11) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 3.832538ms)
Aug 14 11:45:05.832: INFO: (11) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 3.944825ms)
Aug 14 11:45:05.832: INFO: (11) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 4.010923ms)
Aug 14 11:45:05.832: INFO: (11) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 4.051171ms)
Aug 14 11:45:05.832: INFO: (11) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.019429ms)
Aug 14 11:45:05.832: INFO: (11) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.041335ms)
Aug 14 11:45:05.832: INFO: (11) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 4.051071ms)
Aug 14 11:45:05.833: INFO: (11) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 4.052577ms)
Aug 14 11:45:05.833: INFO: (11) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: ... (200; 4.0692ms)
Aug 14 11:45:05.833: INFO: (11) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.155482ms)
Aug 14 11:45:05.833: INFO: (11) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.247452ms)
Aug 14 11:45:05.833: INFO: (11) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.239067ms)
Aug 14 11:45:05.836: INFO: (12) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 2.762841ms)
Aug 14 11:45:05.836: INFO: (12) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 2.93851ms)
Aug 14 11:45:05.836: INFO: (12) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 3.006319ms)
Aug 14 11:45:05.836: INFO: (12) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 2.992642ms)
Aug 14 11:45:05.836: INFO: (12) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.029636ms)
Aug 14 11:45:05.836: INFO: (12) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 3.199862ms)
Aug 14 11:45:05.837: INFO: (12) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.79975ms)
Aug 14 11:45:05.837: INFO: (12) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.847971ms)
Aug 14 11:45:05.837: INFO: (12) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 3.881545ms)
Aug 14 11:45:05.837: INFO: (12) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 5.741699ms)
Aug 14 11:45:05.843: INFO: (13) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 5.759813ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 6.128029ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 6.04681ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 6.056826ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 6.09662ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 6.169584ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 6.116447ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 6.177628ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 6.169448ms)
Aug 14 11:45:05.844: INFO: (13) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 6.237733ms)
Aug 14 11:45:05.846: INFO: (14) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 2.32374ms)
Aug 14 11:45:05.846: INFO: (14) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 4.747327ms)
Aug 14 11:45:05.849: INFO: (14) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 4.969148ms)
Aug 14 11:45:05.849: INFO: (14) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 5.142956ms)
Aug 14 11:45:05.849: INFO: (14) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 5.264853ms)
Aug 14 11:45:05.849: INFO: (14) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 5.343331ms)
Aug 14 11:45:05.849: INFO: (14) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 5.323323ms)
Aug 14 11:45:05.850: INFO: (14) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 6.031601ms)
Aug 14 11:45:05.850: INFO: (14) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 6.289193ms)
Aug 14 11:45:05.850: INFO: (14) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 6.335678ms)
Aug 14 11:45:05.850: INFO: (14) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 6.278385ms)
Aug 14 11:45:05.850: INFO: (14) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 6.332967ms)
Aug 14 11:45:05.850: INFO: (14) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 6.392584ms)
Aug 14 11:45:05.853: INFO: (15) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 2.312457ms)
Aug 14 11:45:05.853: INFO: (15) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 2.35011ms)
Aug 14 11:45:05.854: INFO: (15) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 3.587004ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.280685ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.316286ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.334702ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 4.366515ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 4.407531ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.373099ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.433567ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.341959ms)
Aug 14 11:45:05.855: INFO: (15) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test (200; 2.02952ms)
Aug 14 11:45:05.858: INFO: (16) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.164894ms)
Aug 14 11:45:05.859: INFO: (16) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 3.521646ms)
Aug 14 11:45:05.859: INFO: (16) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 3.783965ms)
Aug 14 11:45:05.859: INFO: (16) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.781381ms)
Aug 14 11:45:05.859: INFO: (16) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 3.889089ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 4.307591ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.366054ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 4.292192ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 4.332633ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.326624ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.475207ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 4.474313ms)
Aug 14 11:45:05.860: INFO: (16) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.560675ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 4.386864ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.484158ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 4.469253ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.432782ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 4.484941ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 4.453628ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.508245ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: ... (200; 4.466349ms)
Aug 14 11:45:05.864: INFO: (17) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 4.515911ms)
Aug 14 11:45:05.865: INFO: (17) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 5.414583ms)
Aug 14 11:45:05.866: INFO: (17) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 5.642781ms)
Aug 14 11:45:05.866: INFO: (17) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 5.777763ms)
Aug 14 11:45:05.866: INFO: (17) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 5.704012ms)
Aug 14 11:45:05.866: INFO: (17) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 5.732472ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 2.7375ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:1080/proxy/: test<... (200; 3.261027ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:460/proxy/: tls baz (200; 3.287379ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.247311ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:162/proxy/: bar (200; 3.253843ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 3.332588ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.269869ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:462/proxy/: tls qux (200; 3.324041ms)
Aug 14 11:45:05.869: INFO: (18) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 3.325252ms)
Aug 14 11:45:05.870: INFO: (18) /api/v1/namespaces/proxy-9910/pods/https:proxy-service-99vcr-gvzts:443/proxy/: test<... (200; 3.48315ms)
Aug 14 11:45:05.874: INFO: (19) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts/proxy/: test (200; 3.478289ms)
Aug 14 11:45:05.874: INFO: (19) /api/v1/namespaces/proxy-9910/pods/proxy-service-99vcr-gvzts:160/proxy/: foo (200; 3.647828ms)
Aug 14 11:45:05.874: INFO: (19) /api/v1/namespaces/proxy-9910/pods/http:proxy-service-99vcr-gvzts:1080/proxy/: ... (200; 3.845984ms)
Aug 14 11:45:05.874: INFO: (19) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname1/proxy/: foo (200; 4.008958ms)
Aug 14 11:45:05.875: INFO: (19) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname2/proxy/: bar (200; 4.292861ms)
Aug 14 11:45:05.875: INFO: (19) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname1/proxy/: tls baz (200; 4.442278ms)
Aug 14 11:45:05.875: INFO: (19) /api/v1/namespaces/proxy-9910/services/proxy-service-99vcr:portname1/proxy/: foo (200; 4.417594ms)
Aug 14 11:45:05.875: INFO: (19) /api/v1/namespaces/proxy-9910/services/https:proxy-service-99vcr:tlsportname2/proxy/: tls qux (200; 4.513699ms)
Aug 14 11:45:05.875: INFO: (19) /api/v1/namespaces/proxy-9910/services/http:proxy-service-99vcr:portname2/proxy/: bar (200; 4.458438ms)
STEP: deleting ReplicationController proxy-service-99vcr in namespace proxy-9910, will wait for the garbage collector to delete the pods
Aug 14 11:45:05.933: INFO: Deleting ReplicationController proxy-service-99vcr took: 6.524209ms
Aug 14 11:45:06.233: INFO: Terminating ReplicationController proxy-service-99vcr pods took: 300.204082ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:45:15.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9910" for this suite.
Aug 14 11:45:21.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:45:21.232: INFO: namespace proxy-9910 deletion completed in 6.09332343s

• [SLOW TEST:25.996 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:45:21.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3534
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 14 11:45:21.630: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 14 11:45:52.552: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.66:8080/dial?request=hostName&protocol=http&host=10.244.1.127&port=8080&tries=1'] Namespace:pod-network-test-3534 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:45:52.552: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:45:52.987331       6 log.go:172] (0xc00079db80) (0xc0018c2640) Create stream
I0814 11:45:52.987368       6 log.go:172] (0xc00079db80) (0xc0018c2640) Stream added, broadcasting: 1
I0814 11:45:52.989737       6 log.go:172] (0xc00079db80) Reply frame received for 1
I0814 11:45:52.989778       6 log.go:172] (0xc00079db80) (0xc00174fd60) Create stream
I0814 11:45:52.989785       6 log.go:172] (0xc00079db80) (0xc00174fd60) Stream added, broadcasting: 3
I0814 11:45:52.990790       6 log.go:172] (0xc00079db80) Reply frame received for 3
I0814 11:45:52.990819       6 log.go:172] (0xc00079db80) (0xc00174fea0) Create stream
I0814 11:45:52.990824       6 log.go:172] (0xc00079db80) (0xc00174fea0) Stream added, broadcasting: 5
I0814 11:45:52.991634       6 log.go:172] (0xc00079db80) Reply frame received for 5
I0814 11:45:53.124544       6 log.go:172] (0xc00079db80) Data frame received for 3
I0814 11:45:53.124570       6 log.go:172] (0xc00174fd60) (3) Data frame handling
I0814 11:45:53.124586       6 log.go:172] (0xc00174fd60) (3) Data frame sent
I0814 11:45:53.125449       6 log.go:172] (0xc00079db80) Data frame received for 5
I0814 11:45:53.125469       6 log.go:172] (0xc00174fea0) (5) Data frame handling
I0814 11:45:53.125513       6 log.go:172] (0xc00079db80) Data frame received for 3
I0814 11:45:53.125541       6 log.go:172] (0xc00174fd60) (3) Data frame handling
I0814 11:45:53.127331       6 log.go:172] (0xc00079db80) Data frame received for 1
I0814 11:45:53.127346       6 log.go:172] (0xc0018c2640) (1) Data frame handling
I0814 11:45:53.127354       6 log.go:172] (0xc0018c2640) (1) Data frame sent
I0814 11:45:53.127364       6 log.go:172] (0xc00079db80) (0xc0018c2640) Stream removed, broadcasting: 1
I0814 11:45:53.127386       6 log.go:172] (0xc00079db80) Go away received
I0814 11:45:53.127516       6 log.go:172] (0xc00079db80) (0xc0018c2640) Stream removed, broadcasting: 1
I0814 11:45:53.127535       6 log.go:172] (0xc00079db80) (0xc00174fd60) Stream removed, broadcasting: 3
I0814 11:45:53.127547       6 log.go:172] (0xc00079db80) (0xc00174fea0) Stream removed, broadcasting: 5
Aug 14 11:45:53.127: INFO: Waiting for endpoints: map[]
Aug 14 11:45:53.180: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.66:8080/dial?request=hostName&protocol=http&host=10.244.2.65&port=8080&tries=1'] Namespace:pod-network-test-3534 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 11:45:53.181: INFO: >>> kubeConfig: /root/.kube/config
I0814 11:45:53.651773       6 log.go:172] (0xc001e47290) (0xc003184fa0) Create stream
I0814 11:45:53.651810       6 log.go:172] (0xc001e47290) (0xc003184fa0) Stream added, broadcasting: 1
I0814 11:45:53.653491       6 log.go:172] (0xc001e47290) Reply frame received for 1
I0814 11:45:53.653530       6 log.go:172] (0xc001e47290) (0xc003185040) Create stream
I0814 11:45:53.653541       6 log.go:172] (0xc001e47290) (0xc003185040) Stream added, broadcasting: 3
I0814 11:45:53.654175       6 log.go:172] (0xc001e47290) Reply frame received for 3
I0814 11:45:53.654204       6 log.go:172] (0xc001e47290) (0xc0031850e0) Create stream
I0814 11:45:53.654214       6 log.go:172] (0xc001e47290) (0xc0031850e0) Stream added, broadcasting: 5
I0814 11:45:53.654821       6 log.go:172] (0xc001e47290) Reply frame received for 5
I0814 11:45:53.727067       6 log.go:172] (0xc001e47290) Data frame received for 3
I0814 11:45:53.727101       6 log.go:172] (0xc003185040) (3) Data frame handling
I0814 11:45:53.727116       6 log.go:172] (0xc003185040) (3) Data frame sent
I0814 11:45:53.727491       6 log.go:172] (0xc001e47290) Data frame received for 5
I0814 11:45:53.727514       6 log.go:172] (0xc0031850e0) (5) Data frame handling
I0814 11:45:53.727556       6 log.go:172] (0xc001e47290) Data frame received for 3
I0814 11:45:53.727589       6 log.go:172] (0xc003185040) (3) Data frame handling
I0814 11:45:53.728934       6 log.go:172] (0xc001e47290) Data frame received for 1
I0814 11:45:53.728957       6 log.go:172] (0xc003184fa0) (1) Data frame handling
I0814 11:45:53.728982       6 log.go:172] (0xc003184fa0) (1) Data frame sent
I0814 11:45:53.728997       6 log.go:172] (0xc001e47290) (0xc003184fa0) Stream removed, broadcasting: 1
I0814 11:45:53.729007       6 log.go:172] (0xc001e47290) Go away received
I0814 11:45:53.729137       6 log.go:172] (0xc001e47290) (0xc003184fa0) Stream removed, broadcasting: 1
I0814 11:45:53.729165       6 log.go:172] (0xc001e47290) (0xc003185040) Stream removed, broadcasting: 3
I0814 11:45:53.729174       6 log.go:172] (0xc001e47290) (0xc0031850e0) Stream removed, broadcasting: 5
Aug 14 11:45:53.729: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:45:53.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3534" for this suite.
Aug 14 11:46:17.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:46:18.052: INFO: namespace pod-network-test-3534 deletion completed in 24.122935944s

• [SLOW TEST:56.821 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:46:18.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Aug 14 11:46:18.140: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix863138712/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:46:18.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4466" for this suite.
Aug 14 11:46:24.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:46:24.713: INFO: namespace kubectl-4466 deletion completed in 6.502303226s

• [SLOW TEST:6.659 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:46:24.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5849
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5849
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5849
Aug 14 11:46:25.211: INFO: Found 0 stateful pods, waiting for 1
Aug 14 11:46:35.944: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Aug 14 11:46:45.328: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 14 11:46:45.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 11:46:57.278: INFO: stderr: "I0814 11:46:57.105579    2431 log.go:172] (0xc000744b00) (0xc0004108c0) Create stream\nI0814 11:46:57.105620    2431 log.go:172] (0xc000744b00) (0xc0004108c0) Stream added, broadcasting: 1\nI0814 11:46:57.108024    2431 log.go:172] (0xc000744b00) Reply frame received for 1\nI0814 11:46:57.108085    2431 log.go:172] (0xc000744b00) (0xc00062a000) Create stream\nI0814 11:46:57.108109    2431 log.go:172] (0xc000744b00) (0xc00062a000) Stream added, broadcasting: 3\nI0814 11:46:57.109325    2431 log.go:172] (0xc000744b00) Reply frame received for 3\nI0814 11:46:57.109366    2431 log.go:172] (0xc000744b00) (0xc000410960) Create stream\nI0814 11:46:57.109379    2431 log.go:172] (0xc000744b00) (0xc000410960) Stream added, broadcasting: 5\nI0814 11:46:57.110254    2431 log.go:172] (0xc000744b00) Reply frame received for 5\nI0814 11:46:57.187113    2431 log.go:172] (0xc000744b00) Data frame received for 5\nI0814 11:46:57.187138    2431 log.go:172] (0xc000410960) (5) Data frame handling\nI0814 11:46:57.187152    2431 log.go:172] (0xc000410960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 11:46:57.264474    2431 log.go:172] (0xc000744b00) Data frame received for 3\nI0814 11:46:57.264518    2431 log.go:172] (0xc00062a000) (3) Data frame handling\nI0814 11:46:57.264542    2431 log.go:172] (0xc00062a000) (3) Data frame sent\nI0814 11:46:57.264568    2431 log.go:172] (0xc000744b00) Data frame received for 5\nI0814 11:46:57.264589    2431 log.go:172] (0xc000410960) (5) Data frame handling\nI0814 11:46:57.264626    2431 log.go:172] (0xc000744b00) Data frame received for 3\nI0814 11:46:57.264654    2431 log.go:172] (0xc00062a000) (3) Data frame handling\nI0814 11:46:57.266747    2431 log.go:172] (0xc000744b00) Data frame received for 1\nI0814 11:46:57.266763    2431 log.go:172] (0xc0004108c0) (1) Data frame handling\nI0814 11:46:57.266776    2431 log.go:172] (0xc0004108c0) (1) Data frame sent\nI0814 11:46:57.266793    2431 log.go:172] (0xc000744b00) (0xc0004108c0) Stream removed, broadcasting: 1\nI0814 11:46:57.267086    2431 log.go:172] (0xc000744b00) Go away received\nI0814 11:46:57.267130    2431 log.go:172] (0xc000744b00) (0xc0004108c0) Stream removed, broadcasting: 1\nI0814 11:46:57.267157    2431 log.go:172] (0xc000744b00) (0xc00062a000) Stream removed, broadcasting: 3\nI0814 11:46:57.267172    2431 log.go:172] (0xc000744b00) (0xc000410960) Stream removed, broadcasting: 5\n"
Aug 14 11:46:57.278: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 11:46:57.278: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 11:46:57.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 14 11:47:07.313: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 11:47:07.313: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 11:47:07.541: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998868s
Aug 14 11:47:08.625: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.865279306s
Aug 14 11:47:09.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.782226362s
Aug 14 11:47:11.008: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.777912815s
Aug 14 11:47:12.200: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.398612303s
Aug 14 11:47:13.223: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.207012212s
Aug 14 11:47:14.272: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.183595563s
Aug 14 11:47:15.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.134895736s
Aug 14 11:47:16.451: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.099551367s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5849
Aug 14 11:47:17.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:47:17.996: INFO: stderr: "I0814 11:47:17.609298    2461 log.go:172] (0xc0008b2420) (0xc000455680) Create stream\nI0814 11:47:17.609358    2461 log.go:172] (0xc0008b2420) (0xc000455680) Stream added, broadcasting: 1\nI0814 11:47:17.611498    2461 log.go:172] (0xc0008b2420) Reply frame received for 1\nI0814 11:47:17.611543    2461 log.go:172] (0xc0008b2420) (0xc000554140) Create stream\nI0814 11:47:17.611572    2461 log.go:172] (0xc0008b2420) (0xc000554140) Stream added, broadcasting: 3\nI0814 11:47:17.612415    2461 log.go:172] (0xc0008b2420) Reply frame received for 3\nI0814 11:47:17.612460    2461 log.go:172] (0xc0008b2420) (0xc00054a820) Create stream\nI0814 11:47:17.612474    2461 log.go:172] (0xc0008b2420) (0xc00054a820) Stream added, broadcasting: 5\nI0814 11:47:17.613303    2461 log.go:172] (0xc0008b2420) Reply frame received for 5\nI0814 11:47:17.686480    2461 log.go:172] (0xc0008b2420) Data frame received for 5\nI0814 11:47:17.686515    2461 log.go:172] (0xc00054a820) (5) Data frame handling\nI0814 11:47:17.686541    2461 log.go:172] (0xc00054a820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0814 11:47:17.989756    2461 log.go:172] (0xc0008b2420) Data frame received for 3\nI0814 11:47:17.989782    2461 log.go:172] (0xc000554140) (3) Data frame handling\nI0814 11:47:17.989795    2461 log.go:172] (0xc000554140) (3) Data frame sent\nI0814 11:47:17.989966    2461 log.go:172] (0xc0008b2420) Data frame received for 3\nI0814 11:47:17.989980    2461 log.go:172] (0xc000554140) (3) Data frame handling\nI0814 11:47:17.990361    2461 log.go:172] (0xc0008b2420) Data frame received for 5\nI0814 11:47:17.990374    2461 log.go:172] (0xc00054a820) (5) Data frame handling\nI0814 11:47:17.991938    2461 log.go:172] (0xc0008b2420) Data frame received for 1\nI0814 11:47:17.991951    2461 log.go:172] (0xc000455680) (1) Data frame handling\nI0814 11:47:17.991963    2461 log.go:172] (0xc000455680) (1) Data frame sent\nI0814 11:47:17.991979    2461 log.go:172] (0xc0008b2420) (0xc000455680) Stream removed, broadcasting: 1\nI0814 11:47:17.991996    2461 log.go:172] (0xc0008b2420) Go away received\nI0814 11:47:17.992291    2461 log.go:172] (0xc0008b2420) (0xc000455680) Stream removed, broadcasting: 1\nI0814 11:47:17.992308    2461 log.go:172] (0xc0008b2420) (0xc000554140) Stream removed, broadcasting: 3\nI0814 11:47:17.992314    2461 log.go:172] (0xc0008b2420) (0xc00054a820) Stream removed, broadcasting: 5\n"
Aug 14 11:47:17.997: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 11:47:17.997: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 11:47:18.085: INFO: Found 1 stateful pods, waiting for 3
Aug 14 11:47:28.090: INFO: Found 2 stateful pods, waiting for 3
Aug 14 11:47:38.095: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:47:38.095: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 11:47:38.095: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 14 11:47:38.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 11:47:38.386: INFO: stderr: "I0814 11:47:38.289827    2480 log.go:172] (0xc000116fd0) (0xc00072abe0) Create stream\nI0814 11:47:38.289897    2480 log.go:172] (0xc000116fd0) (0xc00072abe0) Stream added, broadcasting: 1\nI0814 11:47:38.292321    2480 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0814 11:47:38.292387    2480 log.go:172] (0xc000116fd0) (0xc000a46000) Create stream\nI0814 11:47:38.292696    2480 log.go:172] (0xc000116fd0) (0xc000a46000) Stream added, broadcasting: 3\nI0814 11:47:38.294217    2480 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0814 11:47:38.294261    2480 log.go:172] (0xc000116fd0) (0xc000a460a0) Create stream\nI0814 11:47:38.294276    2480 log.go:172] (0xc000116fd0) (0xc000a460a0) Stream added, broadcasting: 5\nI0814 11:47:38.295068    2480 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0814 11:47:38.355089    2480 log.go:172] (0xc000116fd0) Data frame received for 5\nI0814 11:47:38.355121    2480 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0814 11:47:38.355140    2480 log.go:172] (0xc000a460a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 11:47:38.376790    2480 log.go:172] (0xc000116fd0) Data frame received for 3\nI0814 11:47:38.376821    2480 log.go:172] (0xc000a46000) (3) Data frame handling\nI0814 11:47:38.376833    2480 log.go:172] (0xc000a46000) (3) Data frame sent\nI0814 11:47:38.376853    2480 log.go:172] (0xc000116fd0) Data frame received for 5\nI0814 11:47:38.376861    2480 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0814 11:47:38.376919    2480 log.go:172] (0xc000116fd0) Data frame received for 3\nI0814 11:47:38.376937    2480 log.go:172] (0xc000a46000) (3) Data frame handling\nI0814 11:47:38.378543    2480 log.go:172] (0xc000116fd0) Data frame received for 1\nI0814 11:47:38.378561    2480 log.go:172] (0xc00072abe0) (1) Data frame handling\nI0814 11:47:38.378571    2480 log.go:172] (0xc00072abe0) (1) Data frame sent\nI0814 11:47:38.378585    2480 log.go:172] (0xc000116fd0) (0xc00072abe0) Stream removed, broadcasting: 1\nI0814 11:47:38.378678    2480 log.go:172] (0xc000116fd0) Go away received\nI0814 11:47:38.378897    2480 log.go:172] (0xc000116fd0) (0xc00072abe0) Stream removed, broadcasting: 1\nI0814 11:47:38.378915    2480 log.go:172] (0xc000116fd0) (0xc000a46000) Stream removed, broadcasting: 3\nI0814 11:47:38.378923    2480 log.go:172] (0xc000116fd0) (0xc000a460a0) Stream removed, broadcasting: 5\n"
Aug 14 11:47:38.386: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 11:47:38.386: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 11:47:38.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 11:47:39.363: INFO: stderr: "I0814 11:47:38.505860    2500 log.go:172] (0xc0006f8a50) (0xc0003a46e0) Create stream\nI0814 11:47:38.505913    2500 log.go:172] (0xc0006f8a50) (0xc0003a46e0) Stream added, broadcasting: 1\nI0814 11:47:38.509313    2500 log.go:172] (0xc0006f8a50) Reply frame received for 1\nI0814 11:47:38.509363    2500 log.go:172] (0xc0006f8a50) (0xc000320000) Create stream\nI0814 11:47:38.509387    2500 log.go:172] (0xc0006f8a50) (0xc000320000) Stream added, broadcasting: 3\nI0814 11:47:38.510279    2500 log.go:172] (0xc0006f8a50) Reply frame received for 3\nI0814 11:47:38.510311    2500 log.go:172] (0xc0006f8a50) (0xc000570280) Create stream\nI0814 11:47:38.510321    2500 log.go:172] (0xc0006f8a50) (0xc000570280) Stream added, broadcasting: 5\nI0814 11:47:38.511226    2500 log.go:172] (0xc0006f8a50) Reply frame received for 5\nI0814 11:47:38.574208    2500 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0814 11:47:38.574235    2500 log.go:172] (0xc000570280) (5) Data frame handling\nI0814 11:47:38.574250    2500 log.go:172] (0xc000570280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 11:47:39.352992    2500 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0814 11:47:39.353021    2500 log.go:172] (0xc000570280) (5) Data frame handling\nI0814 11:47:39.353041    2500 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0814 11:47:39.353057    2500 log.go:172] (0xc000320000) (3) Data frame handling\nI0814 11:47:39.353071    2500 log.go:172] (0xc000320000) (3) Data frame sent\nI0814 11:47:39.353079    2500 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0814 11:47:39.353087    2500 log.go:172] (0xc000320000) (3) Data frame handling\nI0814 11:47:39.354752    2500 log.go:172] (0xc0006f8a50) Data frame received for 1\nI0814 11:47:39.354793    2500 log.go:172] (0xc0003a46e0) (1) Data frame handling\nI0814 11:47:39.354826    2500 log.go:172] (0xc0003a46e0) (1) Data frame sent\nI0814 11:47:39.354851    2500 log.go:172] (0xc0006f8a50) (0xc0003a46e0) Stream removed, broadcasting: 1\nI0814 11:47:39.354877    2500 log.go:172] (0xc0006f8a50) Go away received\nI0814 11:47:39.355310    2500 log.go:172] (0xc0006f8a50) (0xc0003a46e0) Stream removed, broadcasting: 1\nI0814 11:47:39.355329    2500 log.go:172] (0xc0006f8a50) (0xc000320000) Stream removed, broadcasting: 3\nI0814 11:47:39.355337    2500 log.go:172] (0xc0006f8a50) (0xc000570280) Stream removed, broadcasting: 5\n"
Aug 14 11:47:39.363: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 11:47:39.363: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 11:47:39.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 14 11:47:39.777: INFO: stderr: "I0814 11:47:39.674105    2521 log.go:172] (0xc000630420) (0xc000624640) Create stream\nI0814 11:47:39.674176    2521 log.go:172] (0xc000630420) (0xc000624640) Stream added, broadcasting: 1\nI0814 11:47:39.676205    2521 log.go:172] (0xc000630420) Reply frame received for 1\nI0814 11:47:39.676256    2521 log.go:172] (0xc000630420) (0xc0003da320) Create stream\nI0814 11:47:39.676271    2521 log.go:172] (0xc000630420) (0xc0003da320) Stream added, broadcasting: 3\nI0814 11:47:39.677079    2521 log.go:172] (0xc000630420) Reply frame received for 3\nI0814 11:47:39.677112    2521 log.go:172] (0xc000630420) (0xc00070e000) Create stream\nI0814 11:47:39.677124    2521 log.go:172] (0xc000630420) (0xc00070e000) Stream added, broadcasting: 5\nI0814 11:47:39.677743    2521 log.go:172] (0xc000630420) Reply frame received for 5\nI0814 11:47:39.731259    2521 log.go:172] (0xc000630420) Data frame received for 5\nI0814 11:47:39.731297    2521 log.go:172] (0xc00070e000) (5) Data frame handling\nI0814 11:47:39.731324    2521 log.go:172] (0xc00070e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0814 11:47:39.764304    2521 log.go:172] (0xc000630420) Data frame received for 3\nI0814 11:47:39.764356    2521 log.go:172] (0xc0003da320) (3) Data frame handling\nI0814 11:47:39.764370    2521 log.go:172] (0xc0003da320) (3) Data frame sent\nI0814 11:47:39.764377    2521 log.go:172] (0xc000630420) Data frame received for 3\nI0814 11:47:39.764383    2521 log.go:172] (0xc0003da320) (3) Data frame handling\nI0814 11:47:39.764484    2521 log.go:172] (0xc000630420) Data frame received for 5\nI0814 11:47:39.764500    2521 log.go:172] (0xc00070e000) (5) Data frame handling\nI0814 11:47:39.766172    2521 log.go:172] (0xc000630420) Data frame received for 1\nI0814 11:47:39.766188    2521 log.go:172] (0xc000624640) (1) Data frame handling\nI0814 11:47:39.766201    2521 log.go:172] (0xc000624640) (1) Data frame sent\nI0814 11:47:39.766215    2521 log.go:172] (0xc000630420) (0xc000624640) Stream removed, broadcasting: 1\nI0814 11:47:39.766236    2521 log.go:172] (0xc000630420) Go away received\nI0814 11:47:39.766540    2521 log.go:172] (0xc000630420) (0xc000624640) Stream removed, broadcasting: 1\nI0814 11:47:39.766559    2521 log.go:172] (0xc000630420) (0xc0003da320) Stream removed, broadcasting: 3\nI0814 11:47:39.766571    2521 log.go:172] (0xc000630420) (0xc00070e000) Stream removed, broadcasting: 5\n"
Aug 14 11:47:39.777: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 14 11:47:39.777: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 14 11:47:39.777: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 11:47:39.780: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 14 11:47:50.114: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 11:47:50.114: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 11:47:50.114: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 11:47:50.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999399s
Aug 14 11:47:51.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975669315s
Aug 14 11:47:52.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971482633s
Aug 14 11:47:53.464: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.912658769s
Aug 14 11:47:54.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.882943856s
Aug 14 11:47:55.493: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.877904328s
Aug 14 11:47:56.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.853419331s
Aug 14 11:47:57.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.848434222s
Aug 14 11:47:58.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.768998938s
Aug 14 11:47:59.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 763.979643ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5849
Aug 14 11:48:00.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:01.172: INFO: stderr: "I0814 11:48:01.091415    2540 log.go:172] (0xc0001168f0) (0xc000350b40) Create stream\nI0814 11:48:01.091465    2540 log.go:172] (0xc0001168f0) (0xc000350b40) Stream added, broadcasting: 1\nI0814 11:48:01.094731    2540 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0814 11:48:01.094793    2540 log.go:172] (0xc0001168f0) (0xc000350280) Create stream\nI0814 11:48:01.094809    2540 log.go:172] (0xc0001168f0) (0xc000350280) Stream added, broadcasting: 3\nI0814 11:48:01.095803    2540 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0814 11:48:01.095847    2540 log.go:172] (0xc0001168f0) (0xc00002e000) Create stream\nI0814 11:48:01.095863    2540 log.go:172] (0xc0001168f0) (0xc00002e000) Stream added, broadcasting: 5\nI0814 11:48:01.096969    2540 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0814 11:48:01.163518    2540 log.go:172] (0xc0001168f0) Data frame received for 3\nI0814 11:48:01.163543    2540 log.go:172] (0xc000350280) (3) Data frame handling\nI0814 11:48:01.163550    2540 log.go:172] (0xc000350280) (3) Data frame sent\nI0814 11:48:01.163571    2540 log.go:172] (0xc0001168f0) Data frame received for 5\nI0814 11:48:01.163594    2540 log.go:172] (0xc00002e000) (5) Data frame handling\nI0814 11:48:01.163603    2540 log.go:172] (0xc00002e000) (5) Data frame sent\nI0814 11:48:01.163607    2540 log.go:172] (0xc0001168f0) Data frame received for 5\nI0814 11:48:01.163611    2540 log.go:172] (0xc00002e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0814 11:48:01.163643    2540 log.go:172] (0xc0001168f0) Data frame received for 3\nI0814 11:48:01.163674    2540 log.go:172] (0xc000350280) (3) Data frame handling\nI0814 11:48:01.165293    2540 log.go:172] (0xc0001168f0) Data frame received for 1\nI0814 11:48:01.165324    2540 log.go:172] (0xc000350b40) (1) Data frame handling\nI0814 11:48:01.165347    2540 log.go:172] (0xc000350b40) (1) Data frame sent\nI0814 11:48:01.165372    2540 log.go:172] (0xc0001168f0) (0xc000350b40) Stream removed, broadcasting: 1\nI0814 11:48:01.165396    2540 log.go:172] (0xc0001168f0) Go away received\nI0814 11:48:01.165772    2540 log.go:172] (0xc0001168f0) (0xc000350b40) Stream removed, broadcasting: 1\nI0814 11:48:01.165798    2540 log.go:172] (0xc0001168f0) (0xc000350280) Stream removed, broadcasting: 3\nI0814 11:48:01.165814    2540 log.go:172] (0xc0001168f0) (0xc00002e000) Stream removed, broadcasting: 5\n"
Aug 14 11:48:01.172: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 11:48:01.172: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 11:48:01.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:01.360: INFO: stderr: "I0814 11:48:01.291016    2560 log.go:172] (0xc00012ae70) (0xc0003e6820) Create stream\nI0814 11:48:01.291075    2560 log.go:172] (0xc00012ae70) (0xc0003e6820) Stream added, broadcasting: 1\nI0814 11:48:01.293852    2560 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0814 11:48:01.293887    2560 log.go:172] (0xc00012ae70) (0xc0003e6000) Create stream\nI0814 11:48:01.293895    2560 log.go:172] (0xc00012ae70) (0xc0003e6000) Stream added, broadcasting: 3\nI0814 11:48:01.294542    2560 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0814 11:48:01.294586    2560 log.go:172] (0xc00012ae70) (0xc0005fa280) Create stream\nI0814 11:48:01.294608    2560 log.go:172] (0xc00012ae70) (0xc0005fa280) Stream added, broadcasting: 5\nI0814 11:48:01.295223    2560 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0814 11:48:01.353267    2560 log.go:172] (0xc00012ae70) Data frame received for 5\nI0814 11:48:01.353309    2560 log.go:172] (0xc0005fa280) (5) Data frame handling\nI0814 11:48:01.353319    2560 log.go:172] (0xc0005fa280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0814 11:48:01.353333    2560 log.go:172] (0xc00012ae70) Data frame received for 3\nI0814 11:48:01.353341    2560 log.go:172] (0xc0003e6000) (3) Data frame handling\nI0814 11:48:01.353352    2560 log.go:172] (0xc0003e6000) (3) Data frame sent\nI0814 11:48:01.353609    2560 log.go:172] (0xc00012ae70) Data frame received for 5\nI0814 11:48:01.353649    2560 log.go:172] (0xc0005fa280) (5) Data frame handling\nI0814 11:48:01.353694    2560 log.go:172] (0xc00012ae70) Data frame received for 3\nI0814 11:48:01.353744    2560 log.go:172] (0xc0003e6000) (3) Data frame handling\nI0814 11:48:01.355281    2560 log.go:172] (0xc00012ae70) Data frame received for 1\nI0814 11:48:01.355299    2560 log.go:172] (0xc0003e6820) (1) Data frame handling\nI0814 11:48:01.355318    2560 log.go:172] (0xc0003e6820) (1) Data frame sent\nI0814 11:48:01.355513    2560 log.go:172] (0xc00012ae70) (0xc0003e6820) Stream removed, broadcasting: 1\nI0814 11:48:01.355671    2560 log.go:172] (0xc00012ae70) Go away received\nI0814 11:48:01.355851    2560 log.go:172] (0xc00012ae70) (0xc0003e6820) Stream removed, broadcasting: 1\nI0814 11:48:01.355871    2560 log.go:172] (0xc00012ae70) (0xc0003e6000) Stream removed, broadcasting: 3\nI0814 11:48:01.355879    2560 log.go:172] (0xc00012ae70) (0xc0005fa280) Stream removed, broadcasting: 5\n"
Aug 14 11:48:01.360: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 14 11:48:01.360: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 14 11:48:01.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:02.538: INFO: rc: 137
Aug 14 11:48:02.539: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    I0814 11:48:02.423683    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Create stream
I0814 11:48:02.423769    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Stream added, broadcasting: 1
I0814 11:48:02.428502    2581 log.go:172] (0xc0003ac630) Reply frame received for 1
I0814 11:48:02.428564    2581 log.go:172] (0xc0003ac630) (0xc0006de3c0) Create stream
I0814 11:48:02.428582    2581 log.go:172] (0xc0003ac630) (0xc0006de3c0) Stream added, broadcasting: 3
I0814 11:48:02.429497    2581 log.go:172] (0xc0003ac630) Reply frame received for 3
I0814 11:48:02.429536    2581 log.go:172] (0xc0003ac630) (0xc00002c000) Create stream
I0814 11:48:02.429550    2581 log.go:172] (0xc0003ac630) (0xc00002c000) Stream added, broadcasting: 5
I0814 11:48:02.430294    2581 log.go:172] (0xc0003ac630) Reply frame received for 5
I0814 11:48:02.489407    2581 log.go:172] (0xc0003ac630) Data frame received for 5
I0814 11:48:02.489431    2581 log.go:172] (0xc00002c000) (5) Data frame handling
I0814 11:48:02.489445    2581 log.go:172] (0xc00002c000) (5) Data frame sent
I0814 11:48:02.489450    2581 log.go:172] (0xc0003ac630) Data frame received for 5
I0814 11:48:02.489453    2581 log.go:172] (0xc00002c000) (5) Data frame handling
+ mv -v /tmp/index.html /usr/share/nginx/html/
I0814 11:48:02.489468    2581 log.go:172] (0xc0003ac630) Data frame received for 3
I0814 11:48:02.489472    2581 log.go:172] (0xc0006de3c0) (3) Data frame handling
I0814 11:48:02.530489    2581 log.go:172] (0xc0003ac630) Data frame received for 1
I0814 11:48:02.530507    2581 log.go:172] (0xc0006dec80) (1) Data frame handling
I0814 11:48:02.530513    2581 log.go:172] (0xc0006dec80) (1) Data frame sent
I0814 11:48:02.530525    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Stream removed, broadcasting: 1
I0814 11:48:02.530537    2581 log.go:172] (0xc0003ac630) Go away received
I0814 11:48:02.531046    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Stream removed, broadcasting: 1
I0814 11:48:02.531069    2581 log.go:172] (0xc0003ac630) (0xc0006de3c0) Stream removed, broadcasting: 3
I0814 11:48:02.531080    2581 log.go:172] (0xc0003ac630) (0xc00002c000) Stream removed, broadcasting: 5
command terminated with exit code 137
 []  0xc0023cee70 exit status 137   true [0xc00139b060 0xc00139b1c0 0xc00139b230] [0xc00139b060 0xc00139b1c0 0xc00139b230] [0xc00139b198 0xc00139b1e8] [0xba7140 0xba7140] 0xc002677aa0 }:
Command stdout:

stderr:
I0814 11:48:02.423683    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Create stream
I0814 11:48:02.423769    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Stream added, broadcasting: 1
I0814 11:48:02.428502    2581 log.go:172] (0xc0003ac630) Reply frame received for 1
I0814 11:48:02.428564    2581 log.go:172] (0xc0003ac630) (0xc0006de3c0) Create stream
I0814 11:48:02.428582    2581 log.go:172] (0xc0003ac630) (0xc0006de3c0) Stream added, broadcasting: 3
I0814 11:48:02.429497    2581 log.go:172] (0xc0003ac630) Reply frame received for 3
I0814 11:48:02.429536    2581 log.go:172] (0xc0003ac630) (0xc00002c000) Create stream
I0814 11:48:02.429550    2581 log.go:172] (0xc0003ac630) (0xc00002c000) Stream added, broadcasting: 5
I0814 11:48:02.430294    2581 log.go:172] (0xc0003ac630) Reply frame received for 5
I0814 11:48:02.489407    2581 log.go:172] (0xc0003ac630) Data frame received for 5
I0814 11:48:02.489431    2581 log.go:172] (0xc00002c000) (5) Data frame handling
I0814 11:48:02.489445    2581 log.go:172] (0xc00002c000) (5) Data frame sent
I0814 11:48:02.489450    2581 log.go:172] (0xc0003ac630) Data frame received for 5
I0814 11:48:02.489453    2581 log.go:172] (0xc00002c000) (5) Data frame handling
+ mv -v /tmp/index.html /usr/share/nginx/html/
I0814 11:48:02.489468    2581 log.go:172] (0xc0003ac630) Data frame received for 3
I0814 11:48:02.489472    2581 log.go:172] (0xc0006de3c0) (3) Data frame handling
I0814 11:48:02.530489    2581 log.go:172] (0xc0003ac630) Data frame received for 1
I0814 11:48:02.530507    2581 log.go:172] (0xc0006dec80) (1) Data frame handling
I0814 11:48:02.530513    2581 log.go:172] (0xc0006dec80) (1) Data frame sent
I0814 11:48:02.530525    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Stream removed, broadcasting: 1
I0814 11:48:02.530537    2581 log.go:172] (0xc0003ac630) Go away received
I0814 11:48:02.531046    2581 log.go:172] (0xc0003ac630) (0xc0006dec80) Stream removed, broadcasting: 1
I0814 11:48:02.531069    2581 log.go:172] (0xc0003ac630) (0xc0006de3c0) Stream removed, broadcasting: 3
I0814 11:48:02.531080    2581 log.go:172] (0xc0003ac630) (0xc00002c000) Stream removed, broadcasting: 5
command terminated with exit code 137

error:
exit status 137
Aug 14 11:48:12.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:12.625: INFO: rc: 1
Aug 14 11:48:12.625: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00310d110 exit status 1   true [0xc0009f5ac8 0xc0009f5b68 0xc0009f5c18] [0xc0009f5ac8 0xc0009f5b68 0xc0009f5c18] [0xc0009f5b38 0xc0009f5bf8] [0xba7140 0xba7140] 0xc001e34f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:48:22.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:22.725: INFO: rc: 1
Aug 14 11:48:22.725: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00310d1d0 exit status 1   true [0xc0009f5c30 0xc0009f5d40 0xc0009f5ec0] [0xc0009f5c30 0xc0009f5d40 0xc0009f5ec0] [0xc0009f5cb8 0xc0009f5e20] [0xba7140 0xba7140] 0xc003376000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:48:32.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:32.819: INFO: rc: 1
Aug 14 11:48:32.820: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000f392f0 exit status 1   true [0xc001b14030 0xc001b14048 0xc001b14060] [0xc001b14030 0xc001b14048 0xc001b14060] [0xc001b14040 0xc001b14058] [0xba7140 0xba7140] 0xc00281d860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:48:42.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:43.134: INFO: rc: 1
Aug 14 11:48:43.134: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00310d290 exit status 1   true [0xc0009f5f28 0xc00057c470 0xc00057c670] [0xc0009f5f28 0xc00057c470 0xc00057c670] [0xc00057c3c8 0xc00057c638] [0xba7140 0xba7140] 0xc003376300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:48:53.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:48:53.243: INFO: rc: 1
Aug 14 11:48:53.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc090 exit status 1   true [0xc000011670 0xc000011cb0 0xc00038a038] [0xc000011670 0xc000011cb0 0xc00038a038] [0xc000011bd0 0xc00038a000] [0xba7140 0xba7140] 0xc001e34f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:49:03.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:49:03.349: INFO: rc: 1
Aug 14 11:49:03.349: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0026b4090 exit status 1   true [0xc0009f4068 0xc0009f41e0 0xc0009f44e8] [0xc0009f4068 0xc0009f41e0 0xc0009f44e8] [0xc0009f4118 0xc0009f4360] [0xba7140 0xba7140] 0xc0019fa360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:49:13.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:49:13.440: INFO: rc: 1
Aug 14 11:49:13.440: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0026b4150 exit status 1   true [0xc0009f4a88 0xc0009f4b50 0xc0009f4bd8] [0xc0009f4a88 0xc0009f4b50 0xc0009f4bd8] [0xc0009f4b48 0xc0009f4ba0] [0xba7140 0xba7140] 0xc0019fa780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:49:23.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:49:23.547: INFO: rc: 1
Aug 14 11:49:23.547: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f640c0 exit status 1   true [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4010 0xc0026d4028] [0xba7140 0xba7140] 0xc001e00ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:49:33.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:49:33.639: INFO: rc: 1
Aug 14 11:49:33.639: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc180 exit status 1   true [0xc00038a0a8 0xc00038a140 0xc00038a178] [0xc00038a0a8 0xc00038a140 0xc00038a178] [0xc00038a138 0xc00038a168] [0xba7140 0xba7140] 0xc001fb80c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:49:43.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:49:43.750: INFO: rc: 1
Aug 14 11:49:43.750: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc240 exit status 1   true [0xc00038a180 0xc00038a1a0 0xc00038bc10] [0xc00038a180 0xc00038a1a0 0xc00038bc10] [0xc00038a190 0xc00038a208] [0xba7140 0xba7140] 0xc001fb83c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:49:53.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:49:53.856: INFO: rc: 1
Aug 14 11:49:53.856: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f64180 exit status 1   true [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4050 0xc0026d4068] [0xba7140 0xba7140] 0xc001e00e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:50:03.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:50:03.955: INFO: rc: 1
Aug 14 11:50:03.955: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f64270 exit status 1   true [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4088 0xc0026d40a0] [0xba7140 0xba7140] 0xc001e014a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:50:13.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:50:14.055: INFO: rc: 1
Aug 14 11:50:14.055: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc360 exit status 1   true [0xc00038bc60 0xc00038bc88 0xc00038bd38] [0xc00038bc60 0xc00038bc88 0xc00038bd38] [0xc00038bc78 0xc00038bd10] [0xba7140 0xba7140] 0xc001fb8720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:50:24.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:50:24.146: INFO: rc: 1
Aug 14 11:50:24.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc5d0 exit status 1   true [0xc00038bd58 0xc00038bd90 0xc00038be50] [0xc00038bd58 0xc00038bd90 0xc00038be50] [0xc00038bd78 0xc00038be38] [0xba7140 0xba7140] 0xc001fb8a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:50:34.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:50:34.594: INFO: rc: 1
Aug 14 11:50:34.594: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00127c0f0 exit status 1   true [0xc00139a030 0xc00139a260 0xc00139a470] [0xc00139a030 0xc00139a260 0xc00139a470] [0xc00139a140 0xc00139a3b0] [0xba7140 0xba7140] 0xc002676240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:50:44.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:50:44.693: INFO: rc: 1
Aug 14 11:50:44.693: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00127c1b0 exit status 1   true [0xc00139a488 0xc00139a4e0 0xc00139a518] [0xc00139a488 0xc00139a4e0 0xc00139a518] [0xc00139a4c0 0xc00139a508] [0xba7140 0xba7140] 0xc0026766c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:50:54.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:50:55.259: INFO: rc: 1
Aug 14 11:50:55.259: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0026b40c0 exit status 1   true [0xc000011670 0xc000011cb0 0xc0009f4098] [0xc000011670 0xc000011cb0 0xc0009f4098] [0xc000011bd0 0xc0009f4068] [0xba7140 0xba7140] 0xc001e34f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:51:05.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:51:05.407: INFO: rc: 1
Aug 14 11:51:05.407: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc0c0 exit status 1   true [0xc00139a030 0xc00139a260 0xc00139a470] [0xc00139a030 0xc00139a260 0xc00139a470] [0xc00139a140 0xc00139a3b0] [0xba7140 0xba7140] 0xc0019fa360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:51:15.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:51:15.496: INFO: rc: 1
Aug 14 11:51:15.496: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00127c120 exit status 1   true [0xc00038a000 0xc00038a130 0xc00038a158] [0xc00038a000 0xc00038a130 0xc00038a158] [0xc00038a0a8 0xc00038a140] [0xba7140 0xba7140] 0xc002676240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:51:25.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:51:25.593: INFO: rc: 1
Aug 14 11:51:25.593: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0026b41b0 exit status 1   true [0xc0009f4118 0xc0009f4360 0xc0009f4af0] [0xc0009f4118 0xc0009f4360 0xc0009f4af0] [0xc0009f4288 0xc0009f4a88] [0xba7140 0xba7140] 0xc001fb80c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:51:35.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:51:35.771: INFO: rc: 1
Aug 14 11:51:35.771: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f640f0 exit status 1   true [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4000 0xc0026d4018 0xc0026d4030] [0xc0026d4010 0xc0026d4028] [0xba7140 0xba7140] 0xc001e00ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:51:45.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:51:45.872: INFO: rc: 1
Aug 14 11:51:45.872: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f64210 exit status 1   true [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4038 0xc0026d4058 0xc0026d4070] [0xc0026d4050 0xc0026d4068] [0xba7140 0xba7140] 0xc001e00e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:51:55.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:51:56.047: INFO: rc: 1
Aug 14 11:51:56.047: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f64300 exit status 1   true [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4078 0xc0026d4090 0xc0026d40a8] [0xc0026d4088 0xc0026d40a0] [0xba7140 0xba7140] 0xc001e014a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:52:06.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:52:06.142: INFO: rc: 1
Aug 14 11:52:06.142: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00127c210 exit status 1   true [0xc00038a168 0xc00038a188 0xc00038a1c0] [0xc00038a168 0xc00038a188 0xc00038a1c0] [0xc00038a180 0xc00038a1a0] [0xba7140 0xba7140] 0xc0026766c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:52:16.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:52:16.243: INFO: rc: 1
Aug 14 11:52:16.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f643c0 exit status 1   true [0xc0026d40b0 0xc0026d40d0 0xc0026d40e8] [0xc0026d40b0 0xc0026d40d0 0xc0026d40e8] [0xc0026d40c8 0xc0026d40e0] [0xba7140 0xba7140] 0xc001e01860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:52:26.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:52:26.350: INFO: rc: 1
Aug 14 11:52:26.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f64480 exit status 1   true [0xc0026d4100 0xc0026d4148 0xc0026d4178] [0xc0026d4100 0xc0026d4148 0xc0026d4178] [0xc0026d4130 0xc0026d4168] [0xba7140 0xba7140] 0xc001e01b60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:52:36.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:52:36.463: INFO: rc: 1
Aug 14 11:52:36.463: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc210 exit status 1   true [0xc00139a488 0xc00139a4e0 0xc00139a518] [0xc00139a488 0xc00139a4e0 0xc00139a518] [0xc00139a4c0 0xc00139a508] [0xba7140 0xba7140] 0xc0019fa780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:52:46.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:52:46.582: INFO: rc: 1
Aug 14 11:52:46.582: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0026b4300 exit status 1   true [0xc0009f4b48 0xc0009f4ba0 0xc0009f4cf8] [0xc0009f4b48 0xc0009f4ba0 0xc0009f4cf8] [0xc0009f4b60 0xc0009f4c30] [0xba7140 0xba7140] 0xc001fb83c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:52:56.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:52:56.682: INFO: rc: 1
Aug 14 11:52:56.682: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029fc090 exit status 1   true [0xc000011670 0xc000011cb0 0xc00139a0d8] [0xc000011670 0xc000011cb0 0xc00139a0d8] [0xc000011bd0 0xc00139a030] [0xba7140 0xba7140] 0xc001e34f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 11:53:06.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5849 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 14 11:53:06.774: INFO: rc: 1
Aug 14 11:53:06.774: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Aug 14 11:53:06.774: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 14 11:53:06.792: INFO: Deleting all statefulset in ns statefulset-5849
Aug 14 11:53:06.794: INFO: Scaling statefulset ss to 0
Aug 14 11:53:06.799: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 11:53:06.801: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:53:06.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5849" for this suite.
Aug 14 11:53:16.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:53:18.074: INFO: namespace statefulset-5849 deletion completed in 11.253936207s

• [SLOW TEST:413.360 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
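Note on the StatefulSet test above: it scales a StatefulSet up and down and verifies that pods are created in ordinal order (ss-0, ss-1, ss-2) and deleted in reverse. A minimal manifest of the kind the test drives might look like this sketch (name, image, and replica count are illustrative, not the test's exact spec):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: ss-svc                 # headless Service giving each pod a stable identity
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: pods start/stop one at a time, in order
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx
```

With OrderedReady, scaling to 0 removes ss-2, ss-1, then ss-0, matching the "scaled down in reverse order" verification step in the log, and halts if any pod is unhealthy.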
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:53:18.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Aug 14 11:53:18.712: INFO: Waiting up to 5m0s for pod "client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484" in namespace "containers-3283" to be "success or failure"
Aug 14 11:53:18.719: INFO: Pod "client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484": Phase="Pending", Reason="", readiness=false. Elapsed: 7.40668ms
Aug 14 11:53:20.723: INFO: Pod "client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011173405s
Aug 14 11:53:22.780: INFO: Pod "client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06793153s
Aug 14 11:53:24.783: INFO: Pod "client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071616635s
STEP: Saw pod success
Aug 14 11:53:24.784: INFO: Pod "client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484" satisfied condition "success or failure"
Aug 14 11:53:24.786: INFO: Trying to get logs from node iruya-worker pod client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484 container test-container: 
STEP: delete the pod
Aug 14 11:53:24.812: INFO: Waiting for pod client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484 to disappear
Aug 14 11:53:24.865: INFO: Pod client-containers-358a0c7d-d6af-4f28-bc53-94b36581b484 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:53:24.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3283" for this suite.
Aug 14 11:53:30.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:53:30.964: INFO: namespace containers-3283 deletion completed in 6.094246953s

• [SLOW TEST:12.889 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
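The Docker Containers test above overrides both the image's entrypoint and its default arguments. A pod of the kind it creates can be sketched as follows (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]      # replaces the image's ENTRYPOINT
    args: ["override", "all"]   # replaces the image's CMD
```

The test then reads the container's logs to confirm the overridden command actually ran.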
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:53:30.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 14 11:53:37.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-2c53f7ef-df3e-478d-b842-53f26a16df2b -c busybox-main-container --namespace=emptydir-1833 -- cat /usr/share/volumeshare/shareddata.txt'
Aug 14 11:53:37.496: INFO: stderr: "I0814 11:53:37.434917    3200 log.go:172] (0xc0009fc210) (0xc000a7ff40) Create stream\nI0814 11:53:37.434957    3200 log.go:172] (0xc0009fc210) (0xc000a7ff40) Stream added, broadcasting: 1\nI0814 11:53:37.437203    3200 log.go:172] (0xc0009fc210) Reply frame received for 1\nI0814 11:53:37.437238    3200 log.go:172] (0xc0009fc210) (0xc000a7e000) Create stream\nI0814 11:53:37.437248    3200 log.go:172] (0xc0009fc210) (0xc000a7e000) Stream added, broadcasting: 3\nI0814 11:53:37.438266    3200 log.go:172] (0xc0009fc210) Reply frame received for 3\nI0814 11:53:37.438297    3200 log.go:172] (0xc0009fc210) (0xc000a7e0a0) Create stream\nI0814 11:53:37.438312    3200 log.go:172] (0xc0009fc210) (0xc000a7e0a0) Stream added, broadcasting: 5\nI0814 11:53:37.438940    3200 log.go:172] (0xc0009fc210) Reply frame received for 5\nI0814 11:53:37.490611    3200 log.go:172] (0xc0009fc210) Data frame received for 5\nI0814 11:53:37.490648    3200 log.go:172] (0xc000a7e0a0) (5) Data frame handling\nI0814 11:53:37.490668    3200 log.go:172] (0xc0009fc210) Data frame received for 3\nI0814 11:53:37.490677    3200 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0814 11:53:37.490689    3200 log.go:172] (0xc000a7e000) (3) Data frame sent\nI0814 11:53:37.490699    3200 log.go:172] (0xc0009fc210) Data frame received for 3\nI0814 11:53:37.490708    3200 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0814 11:53:37.491767    3200 log.go:172] (0xc0009fc210) Data frame received for 1\nI0814 11:53:37.491832    3200 log.go:172] (0xc000a7ff40) (1) Data frame handling\nI0814 11:53:37.491870    3200 log.go:172] (0xc000a7ff40) (1) Data frame sent\nI0814 11:53:37.491908    3200 log.go:172] (0xc0009fc210) (0xc000a7ff40) Stream removed, broadcasting: 1\nI0814 11:53:37.492069    3200 log.go:172] (0xc0009fc210) Go away received\nI0814 11:53:37.492155    3200 log.go:172] (0xc0009fc210) (0xc000a7ff40) Stream removed, broadcasting: 1\nI0814 11:53:37.492170    3200 log.go:172] (0xc0009fc210) (0xc000a7e000) Stream removed, broadcasting: 3\nI0814 11:53:37.492180    3200 log.go:172] (0xc0009fc210) (0xc000a7e0a0) Stream removed, broadcasting: 5\n"
Aug 14 11:53:37.496: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:53:37.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1833" for this suite.
Aug 14 11:53:45.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:53:45.577: INFO: namespace emptydir-1833 deletion completed in 8.078052863s

• [SLOW TEST:14.613 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
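The EmptyDir test above mounts one emptyDir volume into two containers of the same pod: one container writes a file, and the suite execs `cat` in the other to read it back. A minimal sketch of such a pod (names and paths follow the log, the commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                # node-local scratch volume shared by both containers
  containers:
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

This matches the log's flow: the file written by the sub-container is read from busybox-main-container via `kubectl exec ... cat /usr/share/volumeshare/shareddata.txt`.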
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:53:45.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 14 11:54:00.723: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:00.728: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:02.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:02.732: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:04.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:04.732: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:06.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:06.732: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:08.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:08.733: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:10.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:10.733: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:12.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:12.731: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:14.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:14.732: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 11:54:16.728: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 11:54:16.756: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:54:16.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8075" for this suite.
Aug 14 11:54:40.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:54:40.847: INFO: namespace container-lifecycle-hook-8075 deletion completed in 24.086838282s

• [SLOW TEST:55.270 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
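The lifecycle-hook test above first creates a handler pod, then a pod whose container fires an HTTP GET postStart hook at that handler. A sketch of the hooked pod (the path, port, and host IP are hypothetical placeholders for the handler pod's address, not values from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: nginx
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # hypothetical handler endpoint
          port: 8080                  # hypothetical handler port
          host: 10.244.1.5            # hypothetical: IP of the handler pod
```

The hook runs immediately after the container starts; the test then checks the handler observed the request before deleting the pod, which is the "Waiting for pod ... to disappear" loop above.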
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:54:40.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 14 11:54:40.934: INFO: Waiting up to 5m0s for pod "downward-api-7de29845-172d-4bc1-9631-6c299cc47b01" in namespace "downward-api-4514" to be "success or failure"
Aug 14 11:54:40.943: INFO: Pod "downward-api-7de29845-172d-4bc1-9631-6c299cc47b01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.907977ms
Aug 14 11:54:43.519: INFO: Pod "downward-api-7de29845-172d-4bc1-9631-6c299cc47b01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585358794s
Aug 14 11:54:45.799: INFO: Pod "downward-api-7de29845-172d-4bc1-9631-6c299cc47b01": Phase="Running", Reason="", readiness=true. Elapsed: 4.864588956s
Aug 14 11:54:47.802: INFO: Pod "downward-api-7de29845-172d-4bc1-9631-6c299cc47b01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.867812201s
STEP: Saw pod success
Aug 14 11:54:47.802: INFO: Pod "downward-api-7de29845-172d-4bc1-9631-6c299cc47b01" satisfied condition "success or failure"
Aug 14 11:54:47.805: INFO: Trying to get logs from node iruya-worker2 pod downward-api-7de29845-172d-4bc1-9631-6c299cc47b01 container dapi-container: 
STEP: delete the pod
Aug 14 11:54:48.434: INFO: Waiting for pod downward-api-7de29845-172d-4bc1-9631-6c299cc47b01 to disappear
Aug 14 11:54:48.437: INFO: Pod downward-api-7de29845-172d-4bc1-9631-6c299cc47b01 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:54:48.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4514" for this suite.
Aug 14 11:54:56.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:54:56.564: INFO: namespace downward-api-4514 deletion completed in 8.087408285s

• [SLOW TEST:15.717 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
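The Downward API test above exposes the pod's own UID to its container through an environment variable. A minimal sketch (container name follows the log; image, command, and variable name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]    # print the environment so the test can inspect it
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's UID, injected by the kubelet
```

The test reads the container logs and asserts the printed POD_UID matches the UID the API server assigned to the pod.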
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:54:56.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d36a9473-abef-49fe-a392-89c64d4bfcf9
STEP: Creating a pod to test consume configMaps
Aug 14 11:54:56.681: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67" in namespace "projected-8152" to be "success or failure"
Aug 14 11:54:56.685: INFO: Pod "pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67": Phase="Pending", Reason="", readiness=false. Elapsed: 3.651451ms
Aug 14 11:54:58.914: INFO: Pod "pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232260213s
Aug 14 11:55:00.919: INFO: Pod "pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237624751s
Aug 14 11:55:02.923: INFO: Pod "pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24118508s
STEP: Saw pod success
Aug 14 11:55:02.923: INFO: Pod "pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67" satisfied condition "success or failure"
Aug 14 11:55:02.925: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 14 11:55:02.999: INFO: Waiting for pod pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67 to disappear
Aug 14 11:55:03.008: INFO: Pod pod-projected-configmaps-500f37e9-2bb7-4c47-91de-2fce0baeee67 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:55:03.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8152" for this suite.
Aug 14 11:55:09.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:55:09.135: INFO: namespace projected-8152 deletion completed in 6.123590416s

• [SLOW TEST:12.570 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
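"With mappings" in the projected-configMap test above means individual ConfigMap keys are remapped to chosen file paths inside the projected volume, rather than projected one file per key. A sketch (key, path, and mount point are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-1    # the "mapping": key relocated under a chosen path
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
```

The test asserts the container's output is the key's value, proving the remapped path was honored.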
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:55:09.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 14 11:55:17.759: INFO: Successfully updated pod "labelsupdatef77e3426-b3ab-44f0-8781-031ba48b5ca7"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:55:19.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3508" for this suite.
Aug 14 11:55:41.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:55:41.985: INFO: namespace projected-3508 deletion completed in 22.167229368s

• [SLOW TEST:32.848 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:55:41.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0814 11:55:52.094397       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 11:55:52.094: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:55:52.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1542" for this suite.
Aug 14 11:55:58.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:55:58.185: INFO: namespace gc-1542 deletion completed in 6.087069085s

• [SLOW TEST:16.200 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
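Editor's note: the cascading deletion this test exercises corresponds to deleting the ReplicationController with a deletion propagation policy. A minimal sketch of the DeleteOptions body (the exact policy the test sends is an assumption; `Background` produces the observed behaviour of the RC going away first and the garbage collector removing the pods afterwards):

```yaml
# DeleteOptions body accompanying the RC delete request (illustrative).
# With propagationPolicy: Background, the API server deletes the RC
# immediately and the garbage collector then deletes the dependent pods,
# which is what the "wait for all pods to be garbage collected" step checks.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background
```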
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:55:58.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-658/secret-test-2319a775-9fdd-4190-8fb7-95504ed179da
STEP: Creating a pod to test consume secrets
Aug 14 11:55:58.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e" in namespace "secrets-658" to be "success or failure"
Aug 14 11:55:58.758: INFO: Pod "pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.008599ms
Aug 14 11:56:00.762: INFO: Pod "pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007216933s
Aug 14 11:56:02.767: INFO: Pod "pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e": Phase="Running", Reason="", readiness=true. Elapsed: 4.011883579s
Aug 14 11:56:04.771: INFO: Pod "pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016151888s
STEP: Saw pod success
Aug 14 11:56:04.771: INFO: Pod "pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e" satisfied condition "success or failure"
Aug 14 11:56:04.774: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e container env-test: 
STEP: delete the pod
Aug 14 11:56:04.824: INFO: Waiting for pod pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e to disappear
Aug 14 11:56:04.872: INFO: Pod pod-configmaps-5a1e2a62-d62d-4845-b073-e2829638797e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:56:04.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-658" for this suite.
Aug 14 11:56:10.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:56:10.965: INFO: namespace secrets-658 deletion completed in 6.088755682s

• [SLOW TEST:12.780 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
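Editor's note: the test pod above consumes the secret through environment variables. A minimal sketch of such a pod manifest (the secret name is taken from the log; the variable name, key, image, and command are illustrative, not the ones generated by the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-from-secret                  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test                       # container name matches the log
    image: busybox                       # illustrative image
    command: ["sh", "-c", "env"]         # print the environment, then exit (Succeeded)
    env:
    - name: SECRET_DATA                  # illustrative variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-2319a775-9fdd-4190-8fb7-95504ed179da
          key: data-1                    # illustrative key
```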
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:56:10.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 14 11:56:11.339: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 14 11:56:11.349: INFO: Waiting for terminating namespaces to be deleted...
Aug 14 11:56:11.352: INFO: Logging pods the kubelet thinks are on node iruya-worker before test
Aug 14 11:56:11.358: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.358: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 14 11:56:11.358: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.358: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 14 11:56:11.358: INFO: sprout-686cc64cfb-smjks from ims-p7dpm started at 2020-08-13 08:25:21 +0000 UTC (2 container statuses recorded)
Aug 14 11:56:11.358: INFO: 	Container sprout ready: false, restart count 0
Aug 14 11:56:11.358: INFO: 	Container tailer ready: false, restart count 0
Aug 14 11:56:11.358: INFO: homestead-prov-756c8bff5d-d6lsl from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.358: INFO: 	Container homestead-prov ready: false, restart count 0
Aug 14 11:56:11.358: INFO: etcd-5cbf55c8c-k46jp from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.358: INFO: 	Container etcd ready: true, restart count 0
Aug 14 11:56:11.358: INFO: cassandra-76f5c4d86c-h2nwg from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.358: INFO: 	Container cassandra ready: true, restart count 0
Aug 14 11:56:11.358: INFO: homer-74dd4556d9-ws825 from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.358: INFO: 	Container homer ready: true, restart count 0
Aug 14 11:56:11.358: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Aug 14 11:56:11.365: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 14 11:56:11.365: INFO: ellis-57b84b6dd7-xv7nx from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container ellis ready: false, restart count 0
Aug 14 11:56:11.365: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 14 11:56:11.365: INFO: bono-5cdb7bfcdd-rq8q2 from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container bono ready: false, restart count 0
Aug 14 11:56:11.365: INFO: 	Container tailer ready: false, restart count 0
Aug 14 11:56:11.365: INFO: homestead-57586d6cdc-g8qmw from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (2 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container homestead ready: false, restart count 398
Aug 14 11:56:11.365: INFO: 	Container tailer ready: true, restart count 0
Aug 14 11:56:11.365: INFO: ralf-57c4654cb8-sctv6 from ims-p7dpm started at 2020-08-13 08:25:17 +0000 UTC (2 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container ralf ready: true, restart count 0
Aug 14 11:56:11.365: INFO: 	Container tailer ready: true, restart count 0
Aug 14 11:56:11.365: INFO: astaire-5ddcdd6b7f-hppqv from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container astaire ready: true, restart count 0
Aug 14 11:56:11.365: INFO: 	Container tailer ready: true, restart count 0
Aug 14 11:56:11.365: INFO: chronos-687b9884c5-g8mpr from ims-p7dpm started at 2020-08-13 08:25:16 +0000 UTC (2 container statuses recorded)
Aug 14 11:56:11.365: INFO: 	Container chronos ready: true, restart count 0
Aug 14 11:56:11.365: INFO: 	Container tailer ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Aug 14 11:56:11.463: INFO: Pod astaire-5ddcdd6b7f-hppqv requesting resource cpu=0m on Node iruya-worker2
Aug 14 11:56:11.463: INFO: Pod bono-5cdb7bfcdd-rq8q2 requesting resource cpu=0m on Node iruya-worker2
Aug 14 11:56:11.463: INFO: Pod cassandra-76f5c4d86c-h2nwg requesting resource cpu=0m on Node iruya-worker
Aug 14 11:56:11.463: INFO: Pod chronos-687b9884c5-g8mpr requesting resource cpu=0m on Node iruya-worker2
Aug 14 11:56:11.463: INFO: Pod ellis-57b84b6dd7-xv7nx requesting resource cpu=0m on Node iruya-worker2
Aug 14 11:56:11.463: INFO: Pod etcd-5cbf55c8c-k46jp requesting resource cpu=0m on Node iruya-worker
Aug 14 11:56:11.463: INFO: Pod homer-74dd4556d9-ws825 requesting resource cpu=0m on Node iruya-worker
Aug 14 11:56:11.463: INFO: Pod homestead-57586d6cdc-g8qmw requesting resource cpu=0m on Node iruya-worker2
Aug 14 11:56:11.463: INFO: Pod homestead-prov-756c8bff5d-d6lsl requesting resource cpu=0m on Node iruya-worker
Aug 14 11:56:11.464: INFO: Pod ralf-57c4654cb8-sctv6 requesting resource cpu=0m on Node iruya-worker2
Aug 14 11:56:11.464: INFO: Pod sprout-686cc64cfb-smjks requesting resource cpu=0m on Node iruya-worker
Aug 14 11:56:11.464: INFO: Pod kindnet-8kg9z requesting resource cpu=100m on Node iruya-worker2
Aug 14 11:56:11.464: INFO: Pod kindnet-k7tjm requesting resource cpu=100m on Node iruya-worker
Aug 14 11:56:11.464: INFO: Pod kube-proxy-9ktgx requesting resource cpu=0m on Node iruya-worker2
Aug 14 11:56:11.464: INFO: Pod kube-proxy-jzrnl requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-06014f52-a1a1-492e-9405-37a8a1b4a2e8.162b20731d8fa2b8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1549/filler-pod-06014f52-a1a1-492e-9405-37a8a1b4a2e8 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-06014f52-a1a1-492e-9405-37a8a1b4a2e8.162b2073a788fdf0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-06014f52-a1a1-492e-9405-37a8a1b4a2e8.162b2074793a4ab1], Reason = [Created], Message = [Created container filler-pod-06014f52-a1a1-492e-9405-37a8a1b4a2e8]
STEP: Considering event: Type = [Normal], Name = [filler-pod-06014f52-a1a1-492e-9405-37a8a1b4a2e8.162b20748ec1db42], Reason = [Started], Message = [Started container filler-pod-06014f52-a1a1-492e-9405-37a8a1b4a2e8]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1a3ad44c-a124-4036-adf5-5d9b1111ceb1.162b20731aa6a1a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1549/filler-pod-1a3ad44c-a124-4036-adf5-5d9b1111ceb1 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1a3ad44c-a124-4036-adf5-5d9b1111ceb1.162b2073c36886cc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1a3ad44c-a124-4036-adf5-5d9b1111ceb1.162b207472bb658c], Reason = [Created], Message = [Created container filler-pod-1a3ad44c-a124-4036-adf5-5d9b1111ceb1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1a3ad44c-a124-4036-adf5-5d9b1111ceb1.162b20748e679bec], Reason = [Started], Message = [Started container filler-pod-1a3ad44c-a124-4036-adf5-5d9b1111ceb1]
STEP: Considering event: Type = [Warning], Name = [additional-pod.162b2075056932ea], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:56:20.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1549" for this suite.
Aug 14 11:56:30.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:56:30.953: INFO: namespace sched-pred-1549 deletion completed in 10.080397013s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.988 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
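Editor's note: the "requesting resource cpu=..." figures above come from each pod's CPU requests. A filler pod like the ones this test creates would declare them roughly as follows (the image appears in the log's Pulled events; the name and CPU values are illustrative — the test sizes the request to consume most of the node's remaining allocatable CPU):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example               # illustrative; the test generates UUID-suffixed names
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1          # image from the log's Pulled events
    resources:
      requests:
        cpu: 500m                        # illustrative; sized to fill the node
      limits:
        cpu: 500m
```

With the nodes filled this way, one more pod requesting CPU triggers the `FailedScheduling` / `Insufficient cpu` event the test expects.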
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:56:30.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 14 11:56:31.344: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8509,SelfLink:/api/v1/namespaces/watch-8509/configmaps/e2e-watch-test-watch-closed,UID:aa831187-23ab-497d-8425-a56ca356a993,ResourceVersion:4884287,Generation:0,CreationTimestamp:2020-08-14 11:56:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 14 11:56:31.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8509,SelfLink:/api/v1/namespaces/watch-8509/configmaps/e2e-watch-test-watch-closed,UID:aa831187-23ab-497d-8425-a56ca356a993,ResourceVersion:4884288,Generation:0,CreationTimestamp:2020-08-14 11:56:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 14 11:56:31.492: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8509,SelfLink:/api/v1/namespaces/watch-8509/configmaps/e2e-watch-test-watch-closed,UID:aa831187-23ab-497d-8425-a56ca356a993,ResourceVersion:4884289,Generation:0,CreationTimestamp:2020-08-14 11:56:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 14 11:56:31.492: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8509,SelfLink:/api/v1/namespaces/watch-8509/configmaps/e2e-watch-test-watch-closed,UID:aa831187-23ab-497d-8425-a56ca356a993,ResourceVersion:4884290,Generation:0,CreationTimestamp:2020-08-14 11:56:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:56:31.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8509" for this suite.
Aug 14 11:56:39.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:56:39.656: INFO: namespace watch-8509 deletion completed in 8.12262609s

• [SLOW TEST:8.701 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:56:39.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:56:50.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8979" for this suite.
Aug 14 11:57:14.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:57:14.378: INFO: namespace replication-controller-8979 deletion completed in 24.095461716s

• [SLOW TEST:34.722 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
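Editor's note: adoption works because the ReplicationController's selector matches the pre-existing pod's `name` label. A minimal sketch (the label value `pod-adoption` is taken from the test's step text; all other fields are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption-rc                  # illustrative name
spec:
  replicas: 1
  selector:
    name: pod-adoption                   # matches the orphan pod's 'name' label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine   # image family used elsewhere in this run
```

Because the running pod already satisfies `replicas: 1`, the controller adopts it instead of creating a new one.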
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:57:14.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:57:14.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294" in namespace "projected-705" to be "success or failure"
Aug 14 11:57:14.497: INFO: Pod "downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294": Phase="Pending", Reason="", readiness=false. Elapsed: 21.397337ms
Aug 14 11:57:16.500: INFO: Pod "downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024542024s
Aug 14 11:57:18.623: INFO: Pod "downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148013249s
Aug 14 11:57:20.754: INFO: Pod "downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294": Phase="Running", Reason="", readiness=true. Elapsed: 6.278314229s
Aug 14 11:57:22.931: INFO: Pod "downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.455398433s
STEP: Saw pod success
Aug 14 11:57:22.931: INFO: Pod "downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294" satisfied condition "success or failure"
Aug 14 11:57:22.935: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294 container client-container: 
STEP: delete the pod
Aug 14 11:57:23.158: INFO: Waiting for pod downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294 to disappear
Aug 14 11:57:23.454: INFO: Pod downwardapi-volume-c23b04ee-9bb7-48be-bc7e-7ee7ddd3c294 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:57:23.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-705" for this suite.
Aug 14 11:57:33.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:57:33.956: INFO: namespace projected-705 deletion completed in 10.499352665s

• [SLOW TEST:19.577 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
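Editor's note: the DefaultMode check above applies to files in a projected downward API volume. A sketch of the kind of pod the test creates (container name matches the log; the mount path, image, and mode value are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example       # illustrative; the test generates UUID names
spec:
  restartPolicy: Never
  containers:
  - name: client-container               # container name from the log
    image: busybox                       # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # show file modes, then exit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo            # illustrative mount path
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                  # illustrative mode the test would verify on the files
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```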
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:57:33.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 11:57:36.106: INFO: Create a RollingUpdate DaemonSet
Aug 14 11:57:36.110: INFO: Check that daemon pods launch on every node of the cluster
Aug 14 11:57:36.425: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:36.428: INFO: Number of nodes with available pods: 0
Aug 14 11:57:36.428: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:57:37.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:38.109: INFO: Number of nodes with available pods: 0
Aug 14 11:57:38.109: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:57:38.701: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:38.713: INFO: Number of nodes with available pods: 0
Aug 14 11:57:38.713: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:57:39.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:39.433: INFO: Number of nodes with available pods: 0
Aug 14 11:57:39.433: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:57:41.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:41.515: INFO: Number of nodes with available pods: 0
Aug 14 11:57:41.515: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:57:42.577: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:42.580: INFO: Number of nodes with available pods: 0
Aug 14 11:57:42.580: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:57:43.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:43.652: INFO: Number of nodes with available pods: 0
Aug 14 11:57:43.652: INFO: Node iruya-worker is running more than one daemon pod
Aug 14 11:57:44.432: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:44.435: INFO: Number of nodes with available pods: 2
Aug 14 11:57:44.435: INFO: Number of running nodes: 2, number of available pods: 2
Aug 14 11:57:44.435: INFO: Update the DaemonSet to trigger a rollout
Aug 14 11:57:44.441: INFO: Updating DaemonSet daemon-set
Aug 14 11:57:56.643: INFO: Roll back the DaemonSet before rollout is complete
Aug 14 11:57:56.650: INFO: Updating DaemonSet daemon-set
Aug 14 11:57:56.650: INFO: Make sure DaemonSet rollback is complete
Aug 14 11:57:56.690: INFO: Wrong image for pod: daemon-set-6cgqw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 14 11:57:56.690: INFO: Pod daemon-set-6cgqw is not available
Aug 14 11:57:56.924: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:58.042: INFO: Wrong image for pod: daemon-set-6cgqw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 14 11:57:58.042: INFO: Pod daemon-set-6cgqw is not available
Aug 14 11:57:58.045: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:57:58.970: INFO: Wrong image for pod: daemon-set-6cgqw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 14 11:57:58.970: INFO: Pod daemon-set-6cgqw is not available
Aug 14 11:57:58.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 11:58:00.211: INFO: Pod daemon-set-tmcv8 is not available
Aug 14 11:58:00.255: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7666, will wait for the garbage collector to delete the pods
Aug 14 11:58:00.622: INFO: Deleting DaemonSet.extensions daemon-set took: 299.484449ms
Aug 14 11:58:01.222: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.268865ms
Aug 14 11:58:16.425: INFO: Number of nodes with available pods: 0
Aug 14 11:58:16.425: INFO: Number of running nodes: 0, number of available pods: 0
Aug 14 11:58:16.428: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7666/daemonsets","resourceVersion":"4884612"},"items":null}

Aug 14 11:58:16.430: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7666/pods","resourceVersion":"4884612"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:58:16.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7666" for this suite.
Aug 14 11:58:25.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:58:25.339: INFO: namespace daemonsets-7666 deletion completed in 8.897424151s

• [SLOW TEST:51.383 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
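Editor's note: the rollback wait logged above polls until every DaemonSet pod is back on the expected image and reports available; the "Wrong image for pod" and "is not available" lines are the blockers found on each poll. A minimal Python sketch of that per-poll check (a hypothetical helper, not the e2e framework's actual code):

```python
def check_daemonset_rollback(pods, expected_image):
    """Return log-style messages for pods that block rollback completion.

    pods: list of (name, image, available) tuples -- a stand-in for the
    pod objects the e2e framework inspects on each poll.
    """
    messages = []
    for name, image, available in pods:
        if image != expected_image:
            messages.append(
                f"Wrong image for pod: {name}. "
                f"Expected: {expected_image}, got: {image}."
            )
        if not available:
            messages.append(f"Pod {name} is not available")
    return messages  # empty list => rollback is complete
```

Feeding it the state from the log (pod `daemon-set-6cgqw` still on `foo:non-existent`) reproduces the two blocker lines; an empty return corresponds to "DaemonSet rollback is complete".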
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:58:25.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 14 11:58:35.996: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 14 11:58:51.119: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:58:51.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4085" for this suite.
Aug 14 11:59:00.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:59:01.743: INFO: namespace pods-4085 deletion completed in 10.303719207s

• [SLOW TEST:36.404 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
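Editor's note: in the graceful-deletion test above, the framework polls for the pod after sending the delete and treats a NotFound response as proof the kubelet observed and completed the termination ("no pod exists with the name we were looking for..."). A small sketch of that wait loop, with a hypothetical `get_pod` standing in for the API lookup:

```python
class NotFound(Exception):
    """Stand-in for the API server's 404 on a deleted pod."""


def wait_for_termination(get_pod, pod_name, attempts=5):
    """Poll until the pod is gone.

    A NotFound from get_pod is taken as the termination request having been
    observed and completed, mirroring the log line above.
    """
    for _ in range(attempts):
        try:
            get_pod(pod_name)  # pod still exists; keep polling
        except NotFound:
            return "termination observed and completed"
    return "pod still present"
```

The design point the test exercises is that absence of the pod is the success signal; there is no separate "terminated" acknowledgement to wait for.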
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:59:01.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0efe476d-5cc1-4e0c-8dad-ab385a84bcec
STEP: Creating a pod to test consume configMaps
Aug 14 11:59:03.525: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b" in namespace "configmap-6007" to be "success or failure"
Aug 14 11:59:03.584: INFO: Pod "pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.681626ms
Aug 14 11:59:05.589: INFO: Pod "pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063829006s
Aug 14 11:59:07.821: INFO: Pod "pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296066463s
Aug 14 11:59:09.838: INFO: Pod "pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313500507s
Aug 14 11:59:11.843: INFO: Pod "pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.317597963s
STEP: Saw pod success
Aug 14 11:59:11.843: INFO: Pod "pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b" satisfied condition "success or failure"
Aug 14 11:59:11.846: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b container configmap-volume-test: 
STEP: delete the pod
Aug 14 11:59:12.040: INFO: Waiting for pod pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b to disappear
Aug 14 11:59:12.091: INFO: Pod pod-configmaps-a2938e7d-b683-4a9b-a287-0dbf3edc3b1b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:59:12.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6007" for this suite.
Aug 14 11:59:20.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:59:20.442: INFO: namespace configmap-6007 deletion completed in 8.347475505s

• [SLOW TEST:18.699 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
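Editor's note: the "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above show the framework re-reading the pod phase every ~2s until it leaves Pending. The terminal-phase check can be sketched like this (hypothetical helper; the real framework also enforces the 5m timeout and logs elapsed time):

```python
def wait_for_success_or_failure(phases):
    """Walk the sequence of observed pod phases and stop at the first
    terminal one, as the repeated Phase="Pending" polls above do.

    phases: iterable of phase strings as observed on successive polls
    (a stand-in for repeated GETs of the pod).
    """
    for phase in phases:
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")
```

Note that `readiness=false` on a Succeeded pod is expected here: the test pod runs to completion rather than serving readiness probes.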
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:59:20.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 11:59:22.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c" in namespace "downward-api-1916" to be "success or failure"
Aug 14 11:59:22.354: INFO: Pod "downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c": Phase="Pending", Reason="", readiness=false. Elapsed: 134.55505ms
Aug 14 11:59:24.372: INFO: Pod "downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152694579s
Aug 14 11:59:26.409: INFO: Pod "downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188861199s
Aug 14 11:59:28.420: INFO: Pod "downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.199989406s
STEP: Saw pod success
Aug 14 11:59:28.420: INFO: Pod "downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c" satisfied condition "success or failure"
Aug 14 11:59:28.422: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c container client-container: 
STEP: delete the pod
Aug 14 11:59:28.620: INFO: Waiting for pod downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c to disappear
Aug 14 11:59:28.906: INFO: Pod downwardapi-volume-4b32f0cb-275a-4a4d-bccc-70369556251c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:59:28.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1916" for this suite.
Aug 14 11:59:36.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:59:37.240: INFO: namespace downward-api-1916 deletion completed in 8.329708418s

• [SLOW TEST:16.798 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:59:37.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Aug 14 11:59:37.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 14 11:59:37.666: INFO: stderr: ""
Aug 14 11:59:37.666: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:59:37.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1496" for this suite.
Aug 14 11:59:45.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 11:59:45.824: INFO: namespace kubectl-1496 deletion completed in 8.154500218s

• [SLOW TEST:8.584 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
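Editor's note: the api-versions test above asserts that the exact entry `v1` appears in the newline-separated stdout of `kubectl api-versions`. The check amounts to an exact line match, so `autoscaling/v1` or `apps/v1` must not satisfy it:

```python
def has_api_version(kubectl_stdout, version):
    """Check `kubectl api-versions` stdout for an exact group/version entry,
    the same membership assertion the test above makes for "v1"."""
    return version in kubectl_stdout.strip().splitlines()


# Abbreviated sample of the stdout logged above (assumed representative).
sample_stdout = "apps/v1\nautoscaling/v1\nbatch/v1\nv1\n"
```

Matching whole lines rather than substrings is what keeps the core `v1` group distinct from every `<group>/v1` entry around it.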
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 11:59:45.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 14 11:59:45.889: INFO: Waiting up to 5m0s for pod "pod-8d4376d6-a8d3-4652-9632-550965091189" in namespace "emptydir-6968" to be "success or failure"
Aug 14 11:59:45.941: INFO: Pod "pod-8d4376d6-a8d3-4652-9632-550965091189": Phase="Pending", Reason="", readiness=false. Elapsed: 51.558465ms
Aug 14 11:59:48.486: INFO: Pod "pod-8d4376d6-a8d3-4652-9632-550965091189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.596857043s
Aug 14 11:59:50.490: INFO: Pod "pod-8d4376d6-a8d3-4652-9632-550965091189": Phase="Pending", Reason="", readiness=false. Elapsed: 4.600418236s
Aug 14 11:59:52.494: INFO: Pod "pod-8d4376d6-a8d3-4652-9632-550965091189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.604158942s
STEP: Saw pod success
Aug 14 11:59:52.494: INFO: Pod "pod-8d4376d6-a8d3-4652-9632-550965091189" satisfied condition "success or failure"
Aug 14 11:59:52.496: INFO: Trying to get logs from node iruya-worker2 pod pod-8d4376d6-a8d3-4652-9632-550965091189 container test-container: 
STEP: delete the pod
Aug 14 11:59:56.088: INFO: Waiting for pod pod-8d4376d6-a8d3-4652-9632-550965091189 to disappear
Aug 14 11:59:56.091: INFO: Pod pod-8d4376d6-a8d3-4652-9632-550965091189 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 11:59:56.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6968" for this suite.
Aug 14 12:00:04.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:00:04.597: INFO: namespace emptydir-6968 deletion completed in 8.501216527s

• [SLOW TEST:18.772 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
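Editor's note: the emptydir test above ("(non-root,0644,default)") mounts an emptyDir volume, writes a file with mode 0644, and has the test container verify the permission bits. The verification step can be sketched locally with an ordinary file standing in for the volume mount:

```python
import os
import stat
import tempfile


def file_mode_octal(path):
    """Return a file's permission bits as an octal string -- the kind of
    check the emptydir test container performs on the mounted file."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))


# Sketch: create a file the way the test would find it in the emptyDir
# volume, explicitly chmod'ed to 0644 so the umask does not interfere.
with tempfile.NamedTemporaryFile(delete=False) as f:
    mounted_file = f.name
os.chmod(mounted_file, 0o644)
```

The `[LinuxOnly]` tag on the spec exists because these POSIX mode bits are not meaningful on Windows nodes.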
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:00:04.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 14 12:00:04.842: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4884950,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 14 12:00:04.842: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4884950,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 14 12:00:14.850: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4884970,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 14 12:00:14.851: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4884970,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 14 12:00:24.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4884990,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 14 12:00:24.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4884990,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 14 12:00:34.902: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4885011,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 14 12:00:34.903: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-a,UID:f69e3866-16d0-49dc-a1a7-6d3304ca3b1a,ResourceVersion:4885011,Generation:0,CreationTimestamp:2020-08-14 12:00:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 14 12:00:44.908: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-b,UID:33764446-f722-4e39-a267-b13d3fc0f447,ResourceVersion:4885030,Generation:0,CreationTimestamp:2020-08-14 12:00:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 14 12:00:44.908: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-b,UID:33764446-f722-4e39-a267-b13d3fc0f447,ResourceVersion:4885030,Generation:0,CreationTimestamp:2020-08-14 12:00:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 14 12:00:54.915: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-b,UID:33764446-f722-4e39-a267-b13d3fc0f447,ResourceVersion:4885047,Generation:0,CreationTimestamp:2020-08-14 12:00:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 14 12:00:54.915: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4415,SelfLink:/api/v1/namespaces/watch-4415/configmaps/e2e-watch-test-configmap-b,UID:33764446-f722-4e39-a267-b13d3fc0f447,ResourceVersion:4885047,Generation:0,CreationTimestamp:2020-08-14 12:00:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:01:04.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4415" for this suite.
Aug 14 12:01:13.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:01:18.408: INFO: namespace watch-4415 deletion completed in 13.044824716s

• [SLOW TEST:73.811 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
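Editor's note: the Watchers test above runs three watches (label A, label B, A-or-B) and checks that each event is delivered exactly to the watchers whose selector matches, which is why every configmap-A event appears twice in the log. A simplified routing sketch, modelling each selector as the set of accepted values for the `watch-this-configmap` label key (hypothetical names, not the e2e code):

```python
def watchers_notified(event_labels, watchers):
    """Return the (sorted) names of watchers whose selector matches the
    event's labels, mirroring the duplicated ADDED/MODIFIED/DELETED lines.

    watchers: mapping of watcher name -> set of accepted values for the
    'watch-this-configmap' label key (a simplified stand-in for a
    Kubernetes label selector).
    """
    value = event_labels.get("watch-this-configmap")
    return sorted(name for name, accepted in watchers.items()
                  if value in accepted)


watchers = {
    "watch-A": {"multiple-watchers-A"},
    "watch-B": {"multiple-watchers-B"},
    "watch-AB": {"multiple-watchers-A", "multiple-watchers-B"},
}
```

Each configmap-A event matches both `watch-A` and `watch-AB`, and each configmap-B event matches both `watch-B` and `watch-AB`, which is exactly the pairing of log lines above.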
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:01:18.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 14 12:01:25.456: INFO: Pod name wrapped-volume-race-c3cf9aad-06b5-4494-8056-9b99f350acd9: Found 0 pods out of 5
Aug 14 12:01:30.774: INFO: Pod name wrapped-volume-race-c3cf9aad-06b5-4494-8056-9b99f350acd9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c3cf9aad-06b5-4494-8056-9b99f350acd9 in namespace emptydir-wrapper-8068, will wait for the garbage collector to delete the pods
Aug 14 12:02:08.937: INFO: Deleting ReplicationController wrapped-volume-race-c3cf9aad-06b5-4494-8056-9b99f350acd9 took: 441.077227ms
Aug 14 12:02:10.038: INFO: Terminating ReplicationController wrapped-volume-race-c3cf9aad-06b5-4494-8056-9b99f350acd9 pods took: 1.100259323s
STEP: Creating RC which spawns configmap-volume pods
Aug 14 12:03:05.172: INFO: Pod name wrapped-volume-race-fb4ddfc4-5753-4527-967b-338890ddfe6c: Found 0 pods out of 5
Aug 14 12:03:10.254: INFO: Pod name wrapped-volume-race-fb4ddfc4-5753-4527-967b-338890ddfe6c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fb4ddfc4-5753-4527-967b-338890ddfe6c in namespace emptydir-wrapper-8068, will wait for the garbage collector to delete the pods
Aug 14 12:03:33.022: INFO: Deleting ReplicationController wrapped-volume-race-fb4ddfc4-5753-4527-967b-338890ddfe6c took: 7.635869ms
Aug 14 12:03:33.422: INFO: Terminating ReplicationController wrapped-volume-race-fb4ddfc4-5753-4527-967b-338890ddfe6c pods took: 400.310267ms
STEP: Creating RC which spawns configmap-volume pods
Aug 14 12:04:35.628: INFO: Pod name wrapped-volume-race-e8c48d9f-48ff-4d73-9955-075975fa6a3d: Found 0 pods out of 5
Aug 14 12:04:40.670: INFO: Pod name wrapped-volume-race-e8c48d9f-48ff-4d73-9955-075975fa6a3d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e8c48d9f-48ff-4d73-9955-075975fa6a3d in namespace emptydir-wrapper-8068, will wait for the garbage collector to delete the pods
Aug 14 12:04:59.231: INFO: Deleting ReplicationController wrapped-volume-race-e8c48d9f-48ff-4d73-9955-075975fa6a3d took: 104.583727ms
Aug 14 12:04:59.731: INFO: Terminating ReplicationController wrapped-volume-race-e8c48d9f-48ff-4d73-9955-075975fa6a3d pods took: 500.270589ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:06:06.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8068" for this suite.
Aug 14 12:06:27.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:06:27.222: INFO: namespace emptydir-wrapper-8068 deletion completed in 20.326869902s

• [SLOW TEST:308.814 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
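Editor's note: the wrapper-volume race test above repeatedly creates an RC with 5 configmap-volume pods and polls until "Found 5 pods out of 5". A sketch of that counting step, using a name-prefix match as a simplification of the real label-selector lookup (hypothetical helper):

```python
def count_rc_pods(pod_names, rc_name):
    """Count pods spawned by a ReplicationController by name prefix,
    the way the 'Found N pods out of 5' log lines track progress.

    Note: the framework really selects by labels; the prefix match here is
    an assumed simplification for illustration.
    """
    prefix = rc_name + "-"
    return sum(1 for name in pod_names if name.startswith(prefix))
```

The test then deletes the RC and waits for the garbage collector to remove the pods, repeating the cycle three times to surface races between configmap volume mounts.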
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:06:27.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug 14 12:06:27.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5397'
Aug 14 12:06:50.555: INFO: stderr: ""
Aug 14 12:06:50.555: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 14 12:06:50.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5397'
Aug 14 12:06:50.870: INFO: stderr: ""
Aug 14 12:06:50.870: INFO: stdout: "update-demo-nautilus-d7xt8 update-demo-nautilus-vjdzf "
Aug 14 12:06:50.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7xt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:06:51.112: INFO: stderr: ""
Aug 14 12:06:51.112: INFO: stdout: ""
Aug 14 12:06:51.112: INFO: update-demo-nautilus-d7xt8 is created but not running
Aug 14 12:06:56.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5397'
Aug 14 12:06:56.338: INFO: stderr: ""
Aug 14 12:06:56.338: INFO: stdout: "update-demo-nautilus-d7xt8 update-demo-nautilus-vjdzf "
Aug 14 12:06:56.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7xt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:06:57.126: INFO: stderr: ""
Aug 14 12:06:57.126: INFO: stdout: ""
Aug 14 12:06:57.126: INFO: update-demo-nautilus-d7xt8 is created but not running
Aug 14 12:07:02.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5397'
Aug 14 12:07:02.222: INFO: stderr: ""
Aug 14 12:07:02.222: INFO: stdout: "update-demo-nautilus-d7xt8 update-demo-nautilus-vjdzf "
Aug 14 12:07:02.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7xt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:02.331: INFO: stderr: ""
Aug 14 12:07:02.331: INFO: stdout: "true"
Aug 14 12:07:02.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7xt8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:02.433: INFO: stderr: ""
Aug 14 12:07:02.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 12:07:02.433: INFO: validating pod update-demo-nautilus-d7xt8
Aug 14 12:07:02.514: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 12:07:02.514: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 12:07:02.514: INFO: update-demo-nautilus-d7xt8 is verified up and running
Aug 14 12:07:02.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjdzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:03.167: INFO: stderr: ""
Aug 14 12:07:03.167: INFO: stdout: "true"
Aug 14 12:07:03.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjdzf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:03.258: INFO: stderr: ""
Aug 14 12:07:03.258: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 12:07:03.258: INFO: validating pod update-demo-nautilus-vjdzf
Aug 14 12:07:03.263: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 12:07:03.263: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 12:07:03.263: INFO: update-demo-nautilus-vjdzf is verified up and running
STEP: rolling-update to new replication controller
Aug 14 12:07:03.265: INFO: scanned /root for discovery docs: 
Aug 14 12:07:03.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5397'
Aug 14 12:07:32.987: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 14 12:07:32.987: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 14 12:07:32.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5397'
Aug 14 12:07:33.257: INFO: stderr: ""
Aug 14 12:07:33.257: INFO: stdout: "update-demo-kitten-np9cf update-demo-kitten-xtgll "
Aug 14 12:07:33.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-np9cf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:33.598: INFO: stderr: ""
Aug 14 12:07:33.598: INFO: stdout: "true"
Aug 14 12:07:33.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-np9cf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:33.699: INFO: stderr: ""
Aug 14 12:07:33.699: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 14 12:07:33.699: INFO: validating pod update-demo-kitten-np9cf
Aug 14 12:07:33.711: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 14 12:07:33.711: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 14 12:07:33.711: INFO: update-demo-kitten-np9cf is verified up and running
Aug 14 12:07:33.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xtgll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:33.803: INFO: stderr: ""
Aug 14 12:07:33.803: INFO: stdout: "true"
Aug 14 12:07:33.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xtgll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5397'
Aug 14 12:07:33.892: INFO: stderr: ""
Aug 14 12:07:33.892: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 14 12:07:33.892: INFO: validating pod update-demo-kitten-xtgll
Aug 14 12:07:33.935: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 14 12:07:33.935: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 14 12:07:33.935: INFO: update-demo-kitten-xtgll is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:07:33.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5397" for this suite.
Aug 14 12:08:00.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:08:00.118: INFO: namespace kubectl-5397 deletion completed in 26.179542504s

• [SLOW TEST:92.896 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
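The rolling update above starts from a ReplicationController fed to `kubectl create -f -`. A minimal manifest consistent with what the log shows (name `update-demo-nautilus`, selector `name=update-demo`, container `update-demo`, image `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`, two replicas) would look like this; the port is an assumption:

```yaml
# Sketch of the initial RC; kubectl rolling-update then replaces it with
# an equivalent RC running the kitten:1.0 image, one pod at a time.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80   # assumed; not visible in the log
```

Note the deprecation warning in the log: `rolling-update` operated client-side on RCs and was later removed in favor of server-side `kubectl rollout` on Deployments.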
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:08:00.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 14 12:08:00.232: INFO: Waiting up to 5m0s for pod "pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39" in namespace "emptydir-2917" to be "success or failure"
Aug 14 12:08:00.242: INFO: Pod "pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39": Phase="Pending", Reason="", readiness=false. Elapsed: 10.350374ms
Aug 14 12:08:02.890: INFO: Pod "pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657891138s
Aug 14 12:08:04.893: INFO: Pod "pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.66090316s
Aug 14 12:08:06.897: INFO: Pod "pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.665029683s
STEP: Saw pod success
Aug 14 12:08:06.897: INFO: Pod "pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39" satisfied condition "success or failure"
Aug 14 12:08:06.900: INFO: Trying to get logs from node iruya-worker pod pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39 container test-container: 
STEP: delete the pod
Aug 14 12:08:07.112: INFO: Waiting for pod pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39 to disappear
Aug 14 12:08:07.423: INFO: Pod pod-55db7b1b-14d7-457c-bcb9-8812d59f6d39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:08:07.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2917" for this suite.
Aug 14 12:08:13.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:08:13.891: INFO: namespace emptydir-2917 deletion completed in 6.464206124s

• [SLOW TEST:13.773 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
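The `(root,0666,default)` case creates a pod that writes a file into an `emptyDir` volume on the default medium and checks its mode. A hedged sketch of such a pod (image and args assumed, not read from this run):

```yaml
# Sketch: emptyDir on the default medium (node disk); the container
# creates a 0666 file in the volume and reports its permissions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666-example   # the run used a generated UID name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium; "medium: Memory" would use tmpfs instead
```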
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:08:13.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0814 12:08:45.097711       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 12:08:45.097: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:08:45.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5096" for this suite.
Aug 14 12:08:55.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:08:55.252: INFO: namespace gc-5096 deletion completed in 10.152500528s

• [SLOW TEST:41.361 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
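The orphaning behavior exercised above corresponds to deleting the Deployment with `propagationPolicy: Orphan` in the delete options, which obliges the garbage collector to leave the owned ReplicaSet in place rather than cascading the delete:

```yaml
# Request body for the DELETE call (sketch): with Orphan propagation,
# owner references on the ReplicaSet are cleared instead of the RS
# being deleted along with its owning Deployment.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

The kubectl equivalent is `--cascade=orphan` on current clients (older kubectl of this vintage spelled it `--cascade=false`).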
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:08:55.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0c9cb128-50d5-4889-9cfa-5b24a60707a3
STEP: Creating a pod to test consume secrets
Aug 14 12:09:01.221: INFO: Waiting up to 5m0s for pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41" in namespace "secrets-9123" to be "success or failure"
Aug 14 12:09:01.224: INFO: Pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896089ms
Aug 14 12:09:03.257: INFO: Pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035803294s
Aug 14 12:09:05.406: INFO: Pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185185593s
Aug 14 12:09:07.550: INFO: Pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328600598s
Aug 14 12:09:09.581: INFO: Pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41": Phase="Pending", Reason="", readiness=false. Elapsed: 8.359361578s
Aug 14 12:09:11.585: INFO: Pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.363586084s
STEP: Saw pod success
Aug 14 12:09:11.585: INFO: Pod "pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41" satisfied condition "success or failure"
Aug 14 12:09:11.588: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41 container secret-env-test: 
STEP: delete the pod
Aug 14 12:09:12.394: INFO: Waiting for pod pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41 to disappear
Aug 14 12:09:12.551: INFO: Pod pod-secrets-396575bd-c766-43a8-a010-af4ea0568e41 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:09:12.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9123" for this suite.
Aug 14 12:09:20.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:09:20.696: INFO: namespace secrets-9123 deletion completed in 8.14097487s

• [SLOW TEST:25.444 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
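The Secrets test injects a secret value into a container's environment via `secretKeyRef`. A self-contained sketch of the pattern (names, key, value, and image are illustrative; the run used generated names):

```yaml
# Sketch: a Secret plus a pod that consumes one of its keys as an env
# var. The e2e test then reads the container's log to verify the value.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example      # run used secret-test-<uid>
data:
  data-1: dmFsdWUtMQ==           # base64 of "value-1" (assumed key/value)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox               # assumed; the suite uses its own images
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example
          key: data-1
```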
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:09:20.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:10:08.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2683" for this suite.
Aug 14 12:10:22.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:10:22.403: INFO: namespace container-runtime-2683 deletion completed in 14.350531355s

• [SLOW TEST:61.705 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
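The three container names above plausibly encode the restart policy under test (`rpa`/`rpof`/`rpn` reading as Always/OnFailure/Never — an inference from the names, not stated in the log). A sketch of one case, a container that exits under `restartPolicy: Never`, whose `RestartCount`, `Phase`, `Ready`, and `State` are then asserted:

```yaml
# Sketch of the 'terminate-cmd-rpn' case (assumed mapping to
# restartPolicy Never): the container exits once and should not restart,
# so RestartCount stays 0 and the pod reaches a terminal Phase.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd-rpn
    image: busybox                  # assumed image
    command: ["sh", "-c", "exit 0"]
```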
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:10:22.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 14 12:10:22.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4851'
Aug 14 12:10:23.249: INFO: stderr: ""
Aug 14 12:10:23.249: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 14 12:10:23.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:23.375: INFO: stderr: ""
Aug 14 12:10:23.375: INFO: stdout: "update-demo-nautilus-fq7nk update-demo-nautilus-r72sp "
Aug 14 12:10:23.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:23.498: INFO: stderr: ""
Aug 14 12:10:23.498: INFO: stdout: ""
Aug 14 12:10:23.498: INFO: update-demo-nautilus-fq7nk is created but not running
Aug 14 12:10:28.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:28.604: INFO: stderr: ""
Aug 14 12:10:28.604: INFO: stdout: "update-demo-nautilus-fq7nk update-demo-nautilus-r72sp "
Aug 14 12:10:28.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:29.160: INFO: stderr: ""
Aug 14 12:10:29.160: INFO: stdout: ""
Aug 14 12:10:29.160: INFO: update-demo-nautilus-fq7nk is created but not running
Aug 14 12:10:34.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:34.348: INFO: stderr: ""
Aug 14 12:10:34.348: INFO: stdout: "update-demo-nautilus-fq7nk update-demo-nautilus-r72sp "
Aug 14 12:10:34.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:34.439: INFO: stderr: ""
Aug 14 12:10:34.439: INFO: stdout: "true"
Aug 14 12:10:34.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:34.607: INFO: stderr: ""
Aug 14 12:10:34.607: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 12:10:34.607: INFO: validating pod update-demo-nautilus-fq7nk
Aug 14 12:10:34.612: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 12:10:34.612: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 12:10:34.612: INFO: update-demo-nautilus-fq7nk is verified up and running
Aug 14 12:10:34.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:34.957: INFO: stderr: ""
Aug 14 12:10:34.958: INFO: stdout: "true"
Aug 14 12:10:34.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72sp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:35.060: INFO: stderr: ""
Aug 14 12:10:35.060: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 12:10:35.060: INFO: validating pod update-demo-nautilus-r72sp
Aug 14 12:10:35.064: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 12:10:35.064: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 12:10:35.064: INFO: update-demo-nautilus-r72sp is verified up and running
STEP: scaling down the replication controller
Aug 14 12:10:35.066: INFO: scanned /root for discovery docs: 
Aug 14 12:10:35.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4851'
Aug 14 12:10:36.275: INFO: stderr: ""
Aug 14 12:10:36.275: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 14 12:10:36.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:36.374: INFO: stderr: ""
Aug 14 12:10:36.374: INFO: stdout: "update-demo-nautilus-fq7nk update-demo-nautilus-r72sp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 14 12:10:41.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:41.465: INFO: stderr: ""
Aug 14 12:10:41.465: INFO: stdout: "update-demo-nautilus-fq7nk update-demo-nautilus-r72sp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 14 12:10:46.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:46.562: INFO: stderr: ""
Aug 14 12:10:46.562: INFO: stdout: "update-demo-nautilus-fq7nk "
Aug 14 12:10:46.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:46.650: INFO: stderr: ""
Aug 14 12:10:46.651: INFO: stdout: "true"
Aug 14 12:10:46.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:46.796: INFO: stderr: ""
Aug 14 12:10:46.796: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 12:10:46.796: INFO: validating pod update-demo-nautilus-fq7nk
Aug 14 12:10:46.799: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 12:10:46.799: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 12:10:46.799: INFO: update-demo-nautilus-fq7nk is verified up and running
STEP: scaling up the replication controller
Aug 14 12:10:46.800: INFO: scanned /root for discovery docs: 
Aug 14 12:10:46.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4851'
Aug 14 12:10:47.932: INFO: stderr: ""
Aug 14 12:10:47.932: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 14 12:10:47.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:48.042: INFO: stderr: ""
Aug 14 12:10:48.042: INFO: stdout: "update-demo-nautilus-5khl5 update-demo-nautilus-fq7nk "
Aug 14 12:10:48.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5khl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:48.199: INFO: stderr: ""
Aug 14 12:10:48.199: INFO: stdout: ""
Aug 14 12:10:48.199: INFO: update-demo-nautilus-5khl5 is created but not running
Aug 14 12:10:53.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4851'
Aug 14 12:10:54.678: INFO: stderr: ""
Aug 14 12:10:54.678: INFO: stdout: "update-demo-nautilus-5khl5 update-demo-nautilus-fq7nk "
Aug 14 12:10:54.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5khl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:54.855: INFO: stderr: ""
Aug 14 12:10:54.855: INFO: stdout: "true"
Aug 14 12:10:54.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5khl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:55.031: INFO: stderr: ""
Aug 14 12:10:55.032: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 12:10:55.032: INFO: validating pod update-demo-nautilus-5khl5
Aug 14 12:10:55.036: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 12:10:55.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 12:10:55.036: INFO: update-demo-nautilus-5khl5 is verified up and running
Aug 14 12:10:55.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:55.128: INFO: stderr: ""
Aug 14 12:10:55.128: INFO: stdout: "true"
Aug 14 12:10:55.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq7nk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4851'
Aug 14 12:10:55.213: INFO: stderr: ""
Aug 14 12:10:55.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 12:10:55.213: INFO: validating pod update-demo-nautilus-fq7nk
Aug 14 12:10:55.216: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 14 12:10:55.216: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 12:10:55.216: INFO: update-demo-nautilus-fq7nk is verified up and running
STEP: using delete to clean up resources
Aug 14 12:10:55.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4851'
Aug 14 12:10:55.445: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 14 12:10:55.445: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 14 12:10:55.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4851'
Aug 14 12:10:55.728: INFO: stderr: "No resources found.\n"
Aug 14 12:10:55.728: INFO: stdout: ""
Aug 14 12:10:55.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4851 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 14 12:10:55.876: INFO: stderr: ""
Aug 14 12:10:55.876: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:10:55.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4851" for this suite.
Aug 14 12:11:20.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:11:22.192: INFO: namespace kubectl-4851 deletion completed in 26.311855982s

• [SLOW TEST:59.789 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
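Annotation: the scale test above repeatedly shells out to `kubectl get pods -o template` with Go templates to read each pod's image, then logs "validating pod <name>". For anyone mining runs like this one, here is a minimal, hypothetical sketch (not part of the e2e framework) that pairs each validated pod with the image reported in the preceding kubectl query, assuming the exact `stdout:` / `validating pod` formats shown above:

```python
import re

# Pairs each "validating pod <name>" line with the image printed by the
# preceding kubectl template query. Format assumptions match this log;
# pod_images() is an illustrative helper, not a framework function.
def pod_images(log_lines):
    images = {}
    last_image = None
    for line in log_lines:
        m = re.search(r'stdout: "([^"]+:[0-9.]+)"', line)
        if m:
            last_image = m.group(1)
        m = re.search(r"validating pod (\S+)", line)
        if m and last_image:
            images[m.group(1)] = last_image
    return images

sample = [
    'Aug 14 12:10:46.796: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"',
    'Aug 14 12:10:46.796: INFO: validating pod update-demo-nautilus-fq7nk',
]
print(pod_images(sample))
```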
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:11:22.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:11:23.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7415" for this suite.
Aug 14 12:11:49.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:11:49.892: INFO: namespace pods-7415 deletion completed in 26.38067765s

• [SLOW TEST:27.699 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:11:49.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 14 12:11:51.131: INFO: Waiting up to 5m0s for pod "pod-0edbb81b-2446-4040-a0e7-62f877bac44a" in namespace "emptydir-1281" to be "success or failure"
Aug 14 12:11:51.413: INFO: Pod "pod-0edbb81b-2446-4040-a0e7-62f877bac44a": Phase="Pending", Reason="", readiness=false. Elapsed: 282.27024ms
Aug 14 12:11:53.418: INFO: Pod "pod-0edbb81b-2446-4040-a0e7-62f877bac44a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286914062s
Aug 14 12:11:55.440: INFO: Pod "pod-0edbb81b-2446-4040-a0e7-62f877bac44a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309063381s
Aug 14 12:11:57.443: INFO: Pod "pod-0edbb81b-2446-4040-a0e7-62f877bac44a": Phase="Running", Reason="", readiness=true. Elapsed: 6.31254085s
Aug 14 12:11:59.581: INFO: Pod "pod-0edbb81b-2446-4040-a0e7-62f877bac44a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.449785225s
STEP: Saw pod success
Aug 14 12:11:59.581: INFO: Pod "pod-0edbb81b-2446-4040-a0e7-62f877bac44a" satisfied condition "success or failure"
Aug 14 12:11:59.583: INFO: Trying to get logs from node iruya-worker2 pod pod-0edbb81b-2446-4040-a0e7-62f877bac44a container test-container: 
STEP: delete the pod
Aug 14 12:11:59.759: INFO: Waiting for pod pod-0edbb81b-2446-4040-a0e7-62f877bac44a to disappear
Aug 14 12:11:59.793: INFO: Pod pod-0edbb81b-2446-4040-a0e7-62f877bac44a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:11:59.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1281" for this suite.
Aug 14 12:12:07.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:12:07.931: INFO: namespace emptydir-1281 deletion completed in 8.134612903s

• [SLOW TEST:18.038 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
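Annotation: the `Phase="Pending" ... Elapsed: ...` polling lines in the EmptyDir spec above recur across nearly every conformance test in this run. A hedged sketch for extracting the time-to-phase from such poll lines (assumes the exact `Phase=` / `Elapsed:` wording in this log; `elapsed_until` is an illustrative name):

```python
import re

# Returns the Elapsed value in seconds from the first poll line that
# reports the given phase, or None if the phase never appears.
# Handles both "ms" and "s" suffixes as they occur in this log, e.g.:
#   ... Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.449785225s
def elapsed_until(log_lines, phase):
    pat = re.compile(r'Phase="%s".*Elapsed: ([0-9.]+)(ms|s)' % re.escape(phase))
    for line in log_lines:
        m = pat.search(line)
        if m:
            value = float(m.group(1))
            return value / 1000.0 if m.group(2) == "ms" else value
    return None

polls = [
    'Pod "pod-0edbb81b": Phase="Pending", readiness=false. Elapsed: 282.27024ms',
    'Pod "pod-0edbb81b": Phase="Running", readiness=true. Elapsed: 6.31254085s',
    'Pod "pod-0edbb81b": Phase="Succeeded", readiness=false. Elapsed: 8.449785225s',
]
print(elapsed_until(polls, "Succeeded"))
```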
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:12:07.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0aa7c266-7612-4668-a6ef-1474bf8fe277
STEP: Creating secret with name s-test-opt-upd-12ed0a95-593d-41e9-a285-7c3ec1382b07
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0aa7c266-7612-4668-a6ef-1474bf8fe277
STEP: Updating secret s-test-opt-upd-12ed0a95-593d-41e9-a285-7c3ec1382b07
STEP: Creating secret with name s-test-opt-create-917157f7-c6a9-42b0-8a7e-f16661372a65
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:12:20.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8730" for this suite.
Aug 14 12:12:46.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:12:46.803: INFO: namespace secrets-8730 deletion completed in 26.11508045s

• [SLOW TEST:38.871 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:12:46.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-536
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-536
STEP: Deleting pre-stop pod
Aug 14 12:13:04.339: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:13:04.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-536" for this suite.
Aug 14 12:13:47.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:13:48.005: INFO: namespace prestop-536 deletion completed in 43.270344243s

• [SLOW TEST:61.202 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
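Annotation: the PreStop spec above passes because the tester's status payload (the `Saw: {...}` block) records one `prestop` hit and no errors. A minimal, hypothetical check of such a payload (the JSON shape is copied from this run; `prestop_observed` loosely mirrors the pass condition and is not the framework's actual assertion):

```python
import json

# Shape of the "Saw:" payload logged above, trimmed to the fields the
# check below reads.
payload = json.loads("""
{
  "Hostname": "server",
  "Sent": null,
  "Received": {"prestop": 1},
  "Errors": null,
  "StillContactingPeers": true
}
""")

def prestop_observed(state):
    # Pass condition (loosely): no errors recorded, and the server
    # received at least one prestop hook call.
    received = state.get("Received") or {}
    return not state.get("Errors") and received.get("prestop", 0) >= 1

print(prestop_observed(payload))
```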
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:13:48.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 12:13:48.154: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126" in namespace "downward-api-3229" to be "success or failure"
Aug 14 12:13:48.173: INFO: Pod "downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126": Phase="Pending", Reason="", readiness=false. Elapsed: 18.841141ms
Aug 14 12:13:50.256: INFO: Pod "downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10273495s
Aug 14 12:13:52.310: INFO: Pod "downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126": Phase="Running", Reason="", readiness=true. Elapsed: 4.156531139s
Aug 14 12:13:54.314: INFO: Pod "downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160206146s
STEP: Saw pod success
Aug 14 12:13:54.314: INFO: Pod "downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126" satisfied condition "success or failure"
Aug 14 12:13:54.317: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126 container client-container: 
STEP: delete the pod
Aug 14 12:13:54.357: INFO: Waiting for pod downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126 to disappear
Aug 14 12:13:54.370: INFO: Pod downwardapi-volume-1e42d23b-42f5-4cc1-8575-19f6c4db0126 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:13:54.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3229" for this suite.
Aug 14 12:14:00.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:14:00.618: INFO: namespace downward-api-3229 deletion completed in 6.243864324s

• [SLOW TEST:12.613 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:14:00.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-28628f4a-a4df-4273-b65a-b4890b390a22
STEP: Creating a pod to test consume secrets
Aug 14 12:14:01.493: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1" in namespace "projected-3563" to be "success or failure"
Aug 14 12:14:01.562: INFO: Pod "pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 68.362027ms
Aug 14 12:14:03.565: INFO: Pod "pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071788191s
Aug 14 12:14:05.742: INFO: Pod "pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248316871s
Aug 14 12:14:07.745: INFO: Pod "pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25138369s
STEP: Saw pod success
Aug 14 12:14:07.745: INFO: Pod "pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1" satisfied condition "success or failure"
Aug 14 12:14:07.747: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1 container projected-secret-volume-test: 
STEP: delete the pod
Aug 14 12:14:07.902: INFO: Waiting for pod pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1 to disappear
Aug 14 12:14:08.017: INFO: Pod pod-projected-secrets-8700cafa-39ec-40d4-952b-78c49ca40ca1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:14:08.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3563" for this suite.
Aug 14 12:14:14.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:14:14.091: INFO: namespace projected-3563 deletion completed in 6.068589198s

• [SLOW TEST:13.471 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:14:14.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-60218957-d4ef-486b-aabe-98fe5598d48d in namespace container-probe-288
Aug 14 12:14:20.493: INFO: Started pod test-webserver-60218957-d4ef-486b-aabe-98fe5598d48d in namespace container-probe-288
STEP: checking the pod's current state and verifying that restartCount is present
Aug 14 12:14:20.496: INFO: Initial restart count of pod test-webserver-60218957-d4ef-486b-aabe-98fe5598d48d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:18:23.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-288" for this suite.
Aug 14 12:18:34.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:18:34.279: INFO: namespace container-probe-288 deletion completed in 10.573400908s

• [SLOW TEST:260.188 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:18:34.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-83b72223-89a4-4b91-9aec-7919305b0da0
STEP: Creating secret with name secret-projected-all-test-volume-c425b411-fd18-4787-acfb-27799ca8030f
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 14 12:18:35.568: INFO: Waiting up to 5m0s for pod "projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530" in namespace "projected-9894" to be "success or failure"
Aug 14 12:18:35.986: INFO: Pod "projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530": Phase="Pending", Reason="", readiness=false. Elapsed: 417.722742ms
Aug 14 12:18:37.990: INFO: Pod "projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421716899s
Aug 14 12:18:39.994: INFO: Pod "projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425154274s
Aug 14 12:18:41.997: INFO: Pod "projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530": Phase="Running", Reason="", readiness=true. Elapsed: 6.428785078s
Aug 14 12:18:44.001: INFO: Pod "projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.432030627s
STEP: Saw pod success
Aug 14 12:18:44.001: INFO: Pod "projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530" satisfied condition "success or failure"
Aug 14 12:18:44.003: INFO: Trying to get logs from node iruya-worker pod projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530 container projected-all-volume-test: 
STEP: delete the pod
Aug 14 12:18:44.231: INFO: Waiting for pod projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530 to disappear
Aug 14 12:18:44.234: INFO: Pod projected-volume-dec0d2a8-c876-44c1-9a68-98e6ea136530 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:18:44.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9894" for this suite.
Aug 14 12:18:50.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:18:50.435: INFO: namespace projected-9894 deletion completed in 6.198558799s

• [SLOW TEST:16.156 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:18:50.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-xbfv
STEP: Creating a pod to test atomic-volume-subpath
Aug 14 12:18:50.586: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xbfv" in namespace "subpath-6213" to be "success or failure"
Aug 14 12:18:50.595: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Pending", Reason="", readiness=false. Elapsed: 9.163238ms
Aug 14 12:18:53.909: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322637204s
Aug 14 12:18:55.911: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Pending", Reason="", readiness=false. Elapsed: 5.325307964s
Aug 14 12:18:57.915: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 7.329152101s
Aug 14 12:19:00.112: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 9.526246068s
Aug 14 12:19:02.116: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 11.52994308s
Aug 14 12:19:04.544: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 13.95779145s
Aug 14 12:19:06.549: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 15.962478728s
Aug 14 12:19:08.553: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 17.967087407s
Aug 14 12:19:10.558: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 19.971485832s
Aug 14 12:19:12.562: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 21.975421235s
Aug 14 12:19:14.634: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 24.047652174s
Aug 14 12:19:16.637: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Running", Reason="", readiness=true. Elapsed: 26.051188784s
Aug 14 12:19:19.309: INFO: Pod "pod-subpath-test-downwardapi-xbfv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.723221128s
STEP: Saw pod success
Aug 14 12:19:19.309: INFO: Pod "pod-subpath-test-downwardapi-xbfv" satisfied condition "success or failure"
Aug 14 12:19:19.315: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-xbfv container test-container-subpath-downwardapi-xbfv: 
STEP: delete the pod
Aug 14 12:19:19.800: INFO: Waiting for pod pod-subpath-test-downwardapi-xbfv to disappear
Aug 14 12:19:20.333: INFO: Pod pod-subpath-test-downwardapi-xbfv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xbfv
Aug 14 12:19:20.333: INFO: Deleting pod "pod-subpath-test-downwardapi-xbfv" in namespace "subpath-6213"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:19:20.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6213" for this suite.
Aug 14 12:19:30.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:19:30.794: INFO: namespace subpath-6213 deletion completed in 10.102301157s

• [SLOW TEST:40.359 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
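Annotation: each spec in this run ends with a Ginkgo `• [SLOW TEST:NN seconds]` marker (e.g. 260.188 s for the liveness-probe spec above, which deliberately observes the pod for about four minutes). A small sketch for ranking specs by these durations, assuming the exact marker format in this log:

```python
import re

# Collects Ginkgo "[SLOW TEST:...]" durations (seconds) so slow specs
# can be ranked. Illustrative helper; format matches this log's markers.
def slow_tests(log_lines):
    durations = []
    for line in log_lines:
        m = re.search(r"\[SLOW TEST:([0-9.]+) seconds\]", line)
        if m:
            durations.append(float(m.group(1)))
    return sorted(durations, reverse=True)

sample = [
    "[SLOW TEST:59.789 seconds]",
    "[SLOW TEST:260.188 seconds]",
    "[SLOW TEST:18.038 seconds]",
]
print(slow_tests(sample))
```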
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:19:30.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 14 12:19:31.082: INFO: Waiting up to 5m0s for pod "pod-5951d219-4d81-4986-a8f1-719b3d511bbc" in namespace "emptydir-1625" to be "success or failure"
Aug 14 12:19:31.155: INFO: Pod "pod-5951d219-4d81-4986-a8f1-719b3d511bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 72.990664ms
Aug 14 12:19:33.159: INFO: Pod "pod-5951d219-4d81-4986-a8f1-719b3d511bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076883824s
Aug 14 12:19:35.400: INFO: Pod "pod-5951d219-4d81-4986-a8f1-719b3d511bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318322762s
Aug 14 12:19:37.404: INFO: Pod "pod-5951d219-4d81-4986-a8f1-719b3d511bbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.321846085s
STEP: Saw pod success
Aug 14 12:19:37.404: INFO: Pod "pod-5951d219-4d81-4986-a8f1-719b3d511bbc" satisfied condition "success or failure"
Aug 14 12:19:37.406: INFO: Trying to get logs from node iruya-worker pod pod-5951d219-4d81-4986-a8f1-719b3d511bbc container test-container: 
STEP: delete the pod
Aug 14 12:19:37.427: INFO: Waiting for pod pod-5951d219-4d81-4986-a8f1-719b3d511bbc to disappear
Aug 14 12:19:37.432: INFO: Pod pod-5951d219-4d81-4986-a8f1-719b3d511bbc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:19:37.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1625" for this suite.
Aug 14 12:19:45.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:19:45.518: INFO: namespace emptydir-1625 deletion completed in 8.082945662s

• [SLOW TEST:14.723 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
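The EmptyDir spec above exercises a volume backed by tmpfs. A minimal sketch of the kind of pod such a test creates, as a Python dict standing in for the YAML manifest (names, image, and command are illustrative, not the exact spec from test/e2e/common/empty_dir.go):

```python
# Hypothetical pod manifest: an emptyDir volume with medium "Memory"
# (tmpfs) mounted at /test-volume; the test container then inspects the
# mount's type and mode. All names here are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-tmpfs-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "command": ["sh", "-c", "ls -ld /test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        # medium "Memory" asks the kubelet to back the volume with tmpfs
        "volumes": [{"name": "test-volume",
                     "emptyDir": {"medium": "Memory"}}],
    },
}
```

The pod runs to completion ("Succeeded"), which is what the test's "success or failure" wait condition checks.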
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:19:45.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:20:46.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4787" for this suite.
Aug 14 12:21:10.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:21:10.812: INFO: namespace container-probe-4787 deletion completed in 24.202294783s

• [SLOW TEST:85.294 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
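The probe spec above hinges on the difference between readiness and liveness: a failing readiness probe keeps the pod out of Service endpoints (Ready stays false) but, unlike a liveness probe, never causes a container restart — hence the test's "never be ready and never restart" assertion. A hedged sketch of such a container (probe command and timings are illustrative):

```python
# Container whose readiness probe always fails: the pod keeps running,
# is never marked Ready, and the kubelet never restarts it. There is
# deliberately no livenessProbe. Timings are illustrative.
container = {
    "name": "test-webserver",
    "image": "busybox",
    "command": ["sleep", "3600"],
    "readinessProbe": {
        "exec": {"command": ["/bin/false"]},  # always exits non-zero
        "initialDelaySeconds": 5,
        "periodSeconds": 5,
    },
}
```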
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:21:10.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 14 12:21:18.927: INFO: Successfully updated pod "pod-update-activedeadlineseconds-31d1ea0c-8de7-4483-8267-c933f6adc09c"
Aug 14 12:21:18.927: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-31d1ea0c-8de7-4483-8267-c933f6adc09c" in namespace "pods-4941" to be "terminated due to deadline exceeded"
Aug 14 12:21:18.994: INFO: Pod "pod-update-activedeadlineseconds-31d1ea0c-8de7-4483-8267-c933f6adc09c": Phase="Running", Reason="", readiness=true. Elapsed: 67.127439ms
Aug 14 12:21:21.019: INFO: Pod "pod-update-activedeadlineseconds-31d1ea0c-8de7-4483-8267-c933f6adc09c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.091718712s
Aug 14 12:21:21.019: INFO: Pod "pod-update-activedeadlineseconds-31d1ea0c-8de7-4483-8267-c933f6adc09c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:21:21.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4941" for this suite.
Aug 14 12:21:29.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:21:29.114: INFO: namespace pods-4941 deletion completed in 8.091123068s

• [SLOW TEST:18.301 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
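The Pods spec above updates a running pod's `spec.activeDeadlineSeconds`; once the deadline elapses, the kubelet terminates the pod and it reaches `Phase=Failed, Reason=DeadlineExceeded`, which is the transition the log records. A sketch of the update body and the condition the test polls for (the deadline value is illustrative):

```python
import json

# Hypothetical update body: shorten the pod's active deadline so it is
# killed shortly after the update. The value 5 is illustrative.
patch = {"spec": {"activeDeadlineSeconds": 5}}

def terminated_due_to_deadline(pod):
    # Mirrors the wait condition "terminated due to deadline exceeded".
    status = pod.get("status", {})
    return (status.get("phase") == "Failed"
            and status.get("reason") == "DeadlineExceeded")

observed = {"status": {"phase": "Failed", "reason": "DeadlineExceeded"}}
print(json.dumps(patch))
```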
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:21:29.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-16c07cb9-bf02-4e4c-acc8-bbca1b81a4bd
STEP: Creating a pod to test consume secrets
Aug 14 12:21:29.204: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc" in namespace "projected-3668" to be "success or failure"
Aug 14 12:21:29.270: INFO: Pod "pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc": Phase="Pending", Reason="", readiness=false. Elapsed: 66.420996ms
Aug 14 12:21:31.275: INFO: Pod "pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07116604s
Aug 14 12:21:33.279: INFO: Pod "pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075530299s
Aug 14 12:21:35.284: INFO: Pod "pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080057327s
STEP: Saw pod success
Aug 14 12:21:35.284: INFO: Pod "pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc" satisfied condition "success or failure"
Aug 14 12:21:35.287: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc container projected-secret-volume-test: 
STEP: delete the pod
Aug 14 12:21:35.373: INFO: Waiting for pod pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc to disappear
Aug 14 12:21:35.552: INFO: Pod pod-projected-secrets-18837ce8-839e-4e21-a70f-18a4f4a633cc no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:21:35.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3668" for this suite.
Aug 14 12:21:41.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:21:41.690: INFO: namespace projected-3668 deletion completed in 6.133783085s

• [SLOW TEST:12.575 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
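The projected-secret spec above covers two knobs at once: remapping a secret key to a new path ("mappings") and setting a per-item file mode. A hedged sketch of such a volume definition (secret name, key, path, and mode are illustrative):

```python
# Hypothetical projected volume: one secret key remapped to a new path
# with an explicit per-item mode. Modes are octal in manifests but
# serialize as decimal integers in JSON (0o400 -> 256).
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [{
            "secret": {
                "name": "projected-secret-test-map",  # illustrative
                "items": [{
                    "key": "data-1",
                    "path": "new-path-data-1",
                    "mode": 0o400,
                }],
            },
        }],
    },
}
```

The test container then reads the file at the mapped path and verifies both its contents and its mode.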
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:21:41.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5045
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5045 to expose endpoints map[]
Aug 14 12:21:43.194: INFO: successfully validated that service multi-endpoint-test in namespace services-5045 exposes endpoints map[] (395.794463ms elapsed)
STEP: Creating pod pod1 in namespace services-5045
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5045 to expose endpoints map[pod1:[100]]
Aug 14 12:21:48.081: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.40018249s elapsed, will retry)
Aug 14 12:21:50.232: INFO: successfully validated that service multi-endpoint-test in namespace services-5045 exposes endpoints map[pod1:[100]] (6.551322498s elapsed)
STEP: Creating pod pod2 in namespace services-5045
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5045 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 14 12:21:54.791: INFO: successfully validated that service multi-endpoint-test in namespace services-5045 exposes endpoints map[pod1:[100] pod2:[101]] (4.555046274s elapsed)
STEP: Deleting pod pod1 in namespace services-5045
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5045 to expose endpoints map[pod2:[101]]
Aug 14 12:21:56.445: INFO: successfully validated that service multi-endpoint-test in namespace services-5045 exposes endpoints map[pod2:[101]] (1.649688402s elapsed)
STEP: Deleting pod pod2 in namespace services-5045
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5045 to expose endpoints map[]
Aug 14 12:21:59.208: INFO: successfully validated that service multi-endpoint-test in namespace services-5045 exposes endpoints map[] (2.758401504s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:22:00.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5045" for this suite.
Aug 14 12:22:26.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:22:26.495: INFO: namespace services-5045 deletion completed in 26.126482422s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:44.805 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
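The Services spec above watches the endpoints object track pods as they come and go. The target ports 100 and 101 appear directly in the logged endpoint maps (`map[pod1:[100] pod2:[101]]`); the service-side ports and selector below are illustrative guesses, not the test's actual values:

```python
# Hypothetical two-port Service; only targetPort 100/101 are taken from
# the log, everything else is illustrative.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "multi-endpoint-test"},
    "spec": {
        "selector": {"app": "multi-endpoint-test"},  # illustrative
        "ports": [
            {"name": "portname1", "port": 80, "targetPort": 100},
            {"name": "portname2", "port": 81, "targetPort": 101},
        ],
    },
}

# Endpoint states the test waits through as pods are created and deleted.
expected_after_pod1 = {"pod1": [100]}
expected_after_both = {"pod1": [100], "pod2": [101]}
```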
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:22:26.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-44a7622c-d564-43c6-8497-89a9a3fe8000
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:22:26.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3935" for this suite.
Aug 14 12:22:33.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:22:33.172: INFO: namespace secrets-3935 deletion completed in 6.114321164s

• [SLOW TEST:6.677 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
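The Secrets spec above is a negative test: the apiserver rejects a Secret whose data map contains an empty key, so no pod is ever created. A toy check capturing just that rule (the real validation in the apiserver is considerably stricter — keys must also be valid file-name-safe identifiers):

```python
# Minimal sketch of why creation fails: every key in a Secret's data
# map must be non-empty. Real apiserver validation checks much more.
def invalid_secret_key_errors(data):
    errors = []
    for key in data:
        if key == "":
            errors.append("data key must not be empty")
    return errors
```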
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:22:33.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-37a82424-b4e4-4cfd-a5df-c534ea6b5c80 in namespace container-probe-1414
Aug 14 12:22:43.454: INFO: Started pod busybox-37a82424-b4e4-4cfd-a5df-c534ea6b5c80 in namespace container-probe-1414
STEP: checking the pod's current state and verifying that restartCount is present
Aug 14 12:22:43.456: INFO: Initial restart count of pod busybox-37a82424-b4e4-4cfd-a5df-c534ea6b5c80 is 0
Aug 14 12:23:36.242: INFO: Restart count of pod container-probe-1414/busybox-37a82424-b4e4-4cfd-a5df-c534ea6b5c80 is now 1 (52.786744298s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:23:36.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1414" for this suite.
Aug 14 12:23:42.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:23:42.938: INFO: namespace container-probe-1414 deletion completed in 6.438365208s

• [SLOW TEST:69.766 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
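The liveness-probe spec above follows the classic pattern from the Kubernetes docs: the container creates `/tmp/health`, later removes it, and the exec probe `cat /tmp/health` begins failing, so the kubelet restarts the container — the restartCount 0 → 1 transition the log records after ~53s. A hedged sketch (image, args, and timings are illustrative):

```python
# Hypothetical pod resembling the standard exec-liveness example: the
# probe succeeds while /tmp/health exists and fails after the container
# deletes it, triggering a restart. Timings are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-liveness-demo"},
    "spec": {
        "containers": [{
            "name": "busybox",
            "image": "busybox",
            "args": ["/bin/sh", "-c",
                     "touch /tmp/health; sleep 30; "
                     "rm -f /tmp/health; sleep 600"],
            "livenessProbe": {
                "exec": {"command": ["cat", "/tmp/health"]},
                "initialDelaySeconds": 15,
                "periodSeconds": 5,
            },
        }],
    },
}
```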
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:23:42.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 14 12:23:43.087: INFO: Waiting up to 5m0s for pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6" in namespace "emptydir-8685" to be "success or failure"
Aug 14 12:23:43.094: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413918ms
Aug 14 12:23:45.488: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401026358s
Aug 14 12:23:47.491: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404052199s
Aug 14 12:23:49.600: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513329263s
Aug 14 12:23:51.812: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724629437s
Aug 14 12:23:53.815: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.727717612s
Aug 14 12:23:55.902: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.814458149s
Aug 14 12:23:58.151: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.064185274s
STEP: Saw pod success
Aug 14 12:23:58.151: INFO: Pod "pod-7e02d217-09d0-4937-961c-7944b9e8b4c6" satisfied condition "success or failure"
Aug 14 12:23:58.386: INFO: Trying to get logs from node iruya-worker2 pod pod-7e02d217-09d0-4937-961c-7944b9e8b4c6 container test-container: 
STEP: delete the pod
Aug 14 12:23:58.452: INFO: Waiting for pod pod-7e02d217-09d0-4937-961c-7944b9e8b4c6 to disappear
Aug 14 12:23:58.727: INFO: Pod pod-7e02d217-09d0-4937-961c-7944b9e8b4c6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:23:58.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8685" for this suite.
Aug 14 12:24:05.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:24:05.375: INFO: namespace emptydir-8685 deletion completed in 6.641681145s

• [SLOW TEST:22.437 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:24:05.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Aug 14 12:24:05.547: INFO: Waiting up to 5m0s for pod "client-containers-cd495175-29d5-484e-905b-ea0f09029dd5" in namespace "containers-3924" to be "success or failure"
Aug 14 12:24:05.558: INFO: Pod "client-containers-cd495175-29d5-484e-905b-ea0f09029dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.773561ms
Aug 14 12:24:07.562: INFO: Pod "client-containers-cd495175-29d5-484e-905b-ea0f09029dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015042114s
Aug 14 12:24:09.566: INFO: Pod "client-containers-cd495175-29d5-484e-905b-ea0f09029dd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01914138s
STEP: Saw pod success
Aug 14 12:24:09.566: INFO: Pod "client-containers-cd495175-29d5-484e-905b-ea0f09029dd5" satisfied condition "success or failure"
Aug 14 12:24:09.568: INFO: Trying to get logs from node iruya-worker pod client-containers-cd495175-29d5-484e-905b-ea0f09029dd5 container test-container: 
STEP: delete the pod
Aug 14 12:24:09.619: INFO: Waiting for pod client-containers-cd495175-29d5-484e-905b-ea0f09029dd5 to disappear
Aug 14 12:24:09.632: INFO: Pod client-containers-cd495175-29d5-484e-905b-ea0f09029dd5 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:24:09.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3924" for this suite.
Aug 14 12:24:15.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:24:15.736: INFO: namespace containers-3924 deletion completed in 6.100185064s

• [SLOW TEST:10.360 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
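The Docker Containers spec above exercises Kubernetes' documented command/args rules: a container's `command` replaces the image ENTRYPOINT and `args` replaces the image CMD; notably, supplying `command` alone also discards the image CMD. The full resolution table can be written as a small function:

```python
# Kubernetes command/args resolution (per the documented interaction
# table between ENTRYPOINT/CMD and command/args).
def effective_invocation(entrypoint, cmd, command=None, args=None):
    if command is None and args is None:
        return entrypoint + cmd      # image defaults
    if command is not None and args is None:
        return command               # image CMD is ignored
    if command is None:
        return entrypoint + args     # args replace CMD only
    return command + args            # both overridden
```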
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:24:15.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-a39c2185-efb2-47ed-ad0d-d9d169973571
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:24:22.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1119" for this suite.
Aug 14 12:24:44.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:24:44.145: INFO: namespace configmap-1119 deletion completed in 22.122233942s

• [SLOW TEST:28.408 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
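The ConfigMap spec above waits on both text and binary data, reflecting the two payload fields: `data` carries UTF-8 strings, while `binaryData` carries arbitrary bytes, base64-encoded in the API object and decoded when projected into the volume. A sketch (names and bytes are illustrative):

```python
import base64

# Hypothetical ConfigMap mixing text data and binary data; the mounted
# volume exposes the *decoded* bytes of the binaryData entry.
raw = bytes([0x00, 0xFF, 0xFE])
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-binary-demo"},
    "data": {"text-file": "hello"},
    "binaryData": {"binary-file": base64.b64encode(raw).decode("ascii")},
}
```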
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:24:44.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 12:24:44.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832" in namespace "downward-api-7252" to be "success or failure"
Aug 14 12:24:44.639: INFO: Pod "downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832": Phase="Pending", Reason="", readiness=false. Elapsed: 60.050203ms
Aug 14 12:24:46.643: INFO: Pod "downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064200117s
Aug 14 12:24:48.647: INFO: Pod "downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068585922s
Aug 14 12:24:50.652: INFO: Pod "downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073089948s
STEP: Saw pod success
Aug 14 12:24:50.652: INFO: Pod "downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832" satisfied condition "success or failure"
Aug 14 12:24:50.655: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832 container client-container: 
STEP: delete the pod
Aug 14 12:24:50.688: INFO: Waiting for pod downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832 to disappear
Aug 14 12:24:50.697: INFO: Pod downwardapi-volume-eecda5b6-b51d-43e6-aea3-b7260f55d832 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:24:50.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7252" for this suite.
Aug 14 12:24:56.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:24:56.852: INFO: namespace downward-api-7252 deletion completed in 6.15060353s

• [SLOW TEST:12.706 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
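The Downward API spec above exposes the container's memory limit through a volume item with a `resourceFieldRef`. The kubelet writes the limit divided by the item's divisor, so with divisor `1Mi` a 64Mi limit renders as the string `64`. A hedged sketch (container name, path, and divisor are illustrative):

```python
# Hypothetical downwardAPI volume item publishing limits.memory;
# values here are illustrative, not the test's actual spec.
item = {
    "path": "memory_limit",
    "resourceFieldRef": {
        "containerName": "client-container",
        "resource": "limits.memory",
        "divisor": "1Mi",
    },
}

def rendered_value(limit_bytes, divisor_bytes):
    # What ends up in the file: the limit expressed in divisor units
    # (exact multiples shown; real quantity math also handles remainders).
    return str(limit_bytes // divisor_bytes)
```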
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:24:56.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ce83207d-105a-4868-91b9-daf4e485be66
STEP: Creating a pod to test consume secrets
Aug 14 12:24:56.994: INFO: Waiting up to 5m0s for pod "pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481" in namespace "secrets-6861" to be "success or failure"
Aug 14 12:24:56.999: INFO: Pod "pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481": Phase="Pending", Reason="", readiness=false. Elapsed: 4.668306ms
Aug 14 12:24:59.003: INFO: Pod "pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008738615s
Aug 14 12:25:01.006: INFO: Pod "pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01231838s
Aug 14 12:25:03.011: INFO: Pod "pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016757938s
Aug 14 12:25:05.071: INFO: Pod "pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076727939s
STEP: Saw pod success
Aug 14 12:25:05.071: INFO: Pod "pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481" satisfied condition "success or failure"
Aug 14 12:25:05.074: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481 container secret-volume-test: 
STEP: delete the pod
Aug 14 12:25:05.131: INFO: Waiting for pod pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481 to disappear
Aug 14 12:25:05.244: INFO: Pod pod-secrets-a0491b67-5f1d-4de9-bb3f-690d4eb9f481 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:25:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6861" for this suite.
Aug 14 12:25:13.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:25:13.802: INFO: namespace secrets-6861 deletion completed in 8.554505973s

• [SLOW TEST:16.950 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:25:13.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 12:25:14.348: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 14 12:25:16.698: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:25:18.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1803" for this suite.
Aug 14 12:25:28.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:25:28.779: INFO: namespace replication-controller-1803 deletion completed in 10.595505097s

• [SLOW TEST:14.977 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:25:28.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug 14 12:25:29.271: INFO: Waiting up to 5m0s for pod "var-expansion-412a3c38-a043-464c-9921-e542824d29b2" in namespace "var-expansion-3133" to be "success or failure"
Aug 14 12:25:29.478: INFO: Pod "var-expansion-412a3c38-a043-464c-9921-e542824d29b2": Phase="Pending", Reason="", readiness=false. Elapsed: 206.7573ms
Aug 14 12:25:31.693: INFO: Pod "var-expansion-412a3c38-a043-464c-9921-e542824d29b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422354685s
Aug 14 12:25:33.698: INFO: Pod "var-expansion-412a3c38-a043-464c-9921-e542824d29b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427249166s
Aug 14 12:25:35.702: INFO: Pod "var-expansion-412a3c38-a043-464c-9921-e542824d29b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.431375094s
STEP: Saw pod success
Aug 14 12:25:35.702: INFO: Pod "var-expansion-412a3c38-a043-464c-9921-e542824d29b2" satisfied condition "success or failure"
Aug 14 12:25:35.706: INFO: Trying to get logs from node iruya-worker pod var-expansion-412a3c38-a043-464c-9921-e542824d29b2 container dapi-container: 
STEP: delete the pod
Aug 14 12:25:35.757: INFO: Waiting for pod var-expansion-412a3c38-a043-464c-9921-e542824d29b2 to disappear
Aug 14 12:25:35.789: INFO: Pod var-expansion-412a3c38-a043-464c-9921-e542824d29b2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:25:35.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3133" for this suite.
Aug 14 12:25:42.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:25:42.467: INFO: namespace var-expansion-3133 deletion completed in 6.673564189s

• [SLOW TEST:13.687 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:25:42.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-25e34d01-20ad-419a-8f1c-ffb8b0f15414
STEP: Creating a pod to test consume secrets
Aug 14 12:25:42.590: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7" in namespace "projected-5395" to be "success or failure"
Aug 14 12:25:42.646: INFO: Pod "pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7": Phase="Pending", Reason="", readiness=false. Elapsed: 55.359226ms
Aug 14 12:25:44.650: INFO: Pod "pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059677797s
Aug 14 12:25:46.653: INFO: Pod "pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.06298822s
Aug 14 12:25:48.657: INFO: Pod "pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0671096s
STEP: Saw pod success
Aug 14 12:25:48.657: INFO: Pod "pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7" satisfied condition "success or failure"
Aug 14 12:25:48.660: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7 container projected-secret-volume-test: 
STEP: delete the pod
Aug 14 12:25:48.705: INFO: Waiting for pod pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7 to disappear
Aug 14 12:25:48.717: INFO: Pod pod-projected-secrets-e0ba2e0d-4485-4a76-813e-016422af89e7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:25:48.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5395" for this suite.
Aug 14 12:25:54.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:25:54.897: INFO: namespace projected-5395 deletion completed in 6.176836989s

• [SLOW TEST:12.428 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:25:54.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-cfbbce5a-337e-46cd-b243-e59f22f43331
STEP: Creating a pod to test consume configMaps
Aug 14 12:25:55.323: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d" in namespace "projected-1607" to be "success or failure"
Aug 14 12:25:55.346: INFO: Pod "pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.605504ms
Aug 14 12:25:57.437: INFO: Pod "pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113328084s
Aug 14 12:25:59.454: INFO: Pod "pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d": Phase="Running", Reason="", readiness=true. Elapsed: 4.131082528s
Aug 14 12:26:01.458: INFO: Pod "pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134897185s
STEP: Saw pod success
Aug 14 12:26:01.458: INFO: Pod "pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d" satisfied condition "success or failure"
Aug 14 12:26:01.461: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d container projected-configmap-volume-test: 
STEP: delete the pod
Aug 14 12:26:01.485: INFO: Waiting for pod pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d to disappear
Aug 14 12:26:01.495: INFO: Pod pod-projected-configmaps-3b85a6fd-1010-49aa-991e-a23ec5d7398d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:26:01.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1607" for this suite.
Aug 14 12:26:07.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:26:07.610: INFO: namespace projected-1607 deletion completed in 6.111552782s

• [SLOW TEST:12.713 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:26:07.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 14 12:26:07.696: INFO: Waiting up to 5m0s for pod "pod-aeb53808-5565-4001-a61a-786a7b65dba0" in namespace "emptydir-63" to be "success or failure"
Aug 14 12:26:07.699: INFO: Pod "pod-aeb53808-5565-4001-a61a-786a7b65dba0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.549829ms
Aug 14 12:26:09.703: INFO: Pod "pod-aeb53808-5565-4001-a61a-786a7b65dba0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007226763s
Aug 14 12:26:11.712: INFO: Pod "pod-aeb53808-5565-4001-a61a-786a7b65dba0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015794721s
Aug 14 12:26:13.716: INFO: Pod "pod-aeb53808-5565-4001-a61a-786a7b65dba0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020576426s
STEP: Saw pod success
Aug 14 12:26:13.716: INFO: Pod "pod-aeb53808-5565-4001-a61a-786a7b65dba0" satisfied condition "success or failure"
Aug 14 12:26:13.719: INFO: Trying to get logs from node iruya-worker pod pod-aeb53808-5565-4001-a61a-786a7b65dba0 container test-container: 
STEP: delete the pod
Aug 14 12:26:13.736: INFO: Waiting for pod pod-aeb53808-5565-4001-a61a-786a7b65dba0 to disappear
Aug 14 12:26:13.756: INFO: Pod pod-aeb53808-5565-4001-a61a-786a7b65dba0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:26:13.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-63" for this suite.
Aug 14 12:26:21.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:26:21.935: INFO: namespace emptydir-63 deletion completed in 8.176252434s

• [SLOW TEST:14.325 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:26:21.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0814 12:26:34.796124       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 12:26:34.796: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:26:34.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6159" for this suite.
Aug 14 12:26:50.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:26:50.917: INFO: namespace gc-6159 deletion completed in 16.117877436s

• [SLOW TEST:28.982 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:26:50.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6f7de2c3-2a7c-44e8-8c46-b04e4768dd1f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-6f7de2c3-2a7c-44e8-8c46-b04e4768dd1f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:26:57.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4153" for this suite.
Aug 14 12:27:21.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:27:21.604: INFO: namespace configmap-4153 deletion completed in 24.370415492s

• [SLOW TEST:30.686 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:27:21.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 14 12:27:22.440: INFO: namespace kubectl-6525
Aug 14 12:27:22.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6525'
Aug 14 12:27:34.271: INFO: stderr: ""
Aug 14 12:27:34.271: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 14 12:27:35.275: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:35.276: INFO: Found 0 / 1
Aug 14 12:27:36.275: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:36.275: INFO: Found 0 / 1
Aug 14 12:27:37.277: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:37.277: INFO: Found 0 / 1
Aug 14 12:27:38.276: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:38.276: INFO: Found 0 / 1
Aug 14 12:27:39.324: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:39.324: INFO: Found 0 / 1
Aug 14 12:27:40.276: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:40.276: INFO: Found 0 / 1
Aug 14 12:27:41.276: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:41.276: INFO: Found 1 / 1
Aug 14 12:27:41.276: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 14 12:27:41.279: INFO: Selector matched 1 pods for map[app:redis]
Aug 14 12:27:41.279: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 14 12:27:41.279: INFO: wait on redis-master startup in kubectl-6525 
Aug 14 12:27:41.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s254q redis-master --namespace=kubectl-6525'
Aug 14 12:27:41.395: INFO: stderr: ""
Aug 14 12:27:41.395: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 14 Aug 12:27:39.658 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Aug 12:27:39.658 # Server started, Redis version 3.2.12\n1:M 14 Aug 12:27:39.658 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Aug 12:27:39.658 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 14 12:27:41.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6525'
Aug 14 12:27:41.538: INFO: stderr: ""
Aug 14 12:27:41.538: INFO: stdout: "service/rm2 exposed\n"
Aug 14 12:27:41.541: INFO: Service rm2 in namespace kubectl-6525 found.
STEP: exposing service
Aug 14 12:27:43.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6525'
Aug 14 12:27:43.718: INFO: stderr: ""
Aug 14 12:27:43.718: INFO: stdout: "service/rm3 exposed\n"
Aug 14 12:27:43.726: INFO: Service rm3 in namespace kubectl-6525 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:27:45.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6525" for this suite.
Aug 14 12:28:09.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:28:09.900: INFO: namespace kubectl-6525 deletion completed in 24.16622168s

• [SLOW TEST:48.296 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:28:09.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bc533b7b-ff18-40a5-b039-804ac272dca3
STEP: Creating a pod to test consume configMaps
Aug 14 12:28:10.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47" in namespace "configmap-4058" to be "success or failure"
Aug 14 12:28:10.841: INFO: Pod "pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.267812ms
Aug 14 12:28:12.872: INFO: Pod "pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038742019s
Aug 14 12:28:14.888: INFO: Pod "pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05489291s
Aug 14 12:28:16.892: INFO: Pod "pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058832701s
STEP: Saw pod success
Aug 14 12:28:16.892: INFO: Pod "pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47" satisfied condition "success or failure"
Aug 14 12:28:16.894: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47 container configmap-volume-test: 
STEP: delete the pod
Aug 14 12:28:16.950: INFO: Waiting for pod pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47 to disappear
Aug 14 12:28:16.984: INFO: Pod pod-configmaps-619dd1b5-dc42-449c-903d-ad4ecd276c47 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:28:16.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4058" for this suite.
Aug 14 12:28:23.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:28:23.100: INFO: namespace configmap-4058 deletion completed in 6.111786361s

• [SLOW TEST:13.200 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:28:23.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug 14 12:28:23.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8523'
Aug 14 12:28:23.471: INFO: stderr: ""
Aug 14 12:28:23.471: INFO: stdout: "pod/pause created\n"
Aug 14 12:28:23.471: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 14 12:28:23.471: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8523" to be "running and ready"
Aug 14 12:28:23.487: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.723635ms
Aug 14 12:28:25.491: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019955694s
Aug 14 12:28:27.495: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.023838786s
Aug 14 12:28:27.495: INFO: Pod "pause" satisfied condition "running and ready"
Aug 14 12:28:27.495: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 14 12:28:27.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8523'
Aug 14 12:28:27.604: INFO: stderr: ""
Aug 14 12:28:27.604: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 14 12:28:27.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8523'
Aug 14 12:28:27.715: INFO: stderr: ""
Aug 14 12:28:27.715: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 14 12:28:27.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8523'
Aug 14 12:28:27.819: INFO: stderr: ""
Aug 14 12:28:27.819: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 14 12:28:27.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8523'
Aug 14 12:28:27.917: INFO: stderr: ""
Aug 14 12:28:27.917: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug 14 12:28:27.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8523'
Aug 14 12:28:28.704: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 14 12:28:28.704: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 14 12:28:28.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8523'
Aug 14 12:28:28.900: INFO: stderr: "No resources found.\n"
Aug 14 12:28:28.900: INFO: stdout: ""
Aug 14 12:28:28.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8523 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 14 12:28:28.987: INFO: stderr: ""
Aug 14 12:28:28.987: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:28:28.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8523" for this suite.
Aug 14 12:28:35.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:28:35.203: INFO: namespace kubectl-8523 deletion completed in 6.2124687s

• [SLOW TEST:12.103 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
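The label add/verify/remove flow the test above exercises can be replayed by hand; a minimal sketch (the pause image tag is an assumption, not taken from the log — the test creates its pod from a manifest piped to `kubectl create -f -`):

```shell
# Create a pod to label (image tag assumed; any long-running image works).
kubectl run pause --image=k8s.gcr.io/pause:3.1 --restart=Never

# Add the label, then surface it as a column with -L.
kubectl label pod pause testing-label=testing-label-value
kubectl get pod pause -L testing-label

# A trailing dash on the key removes the label; the TESTING-LABEL
# column comes back empty, which is what the test verifies.
kubectl label pod pause testing-label-
kubectl get pod pause -L testing-label

# Clean up the same way the test's AfterEach does.
kubectl delete pod pause --grace-period=0 --force
```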
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:28:35.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 14 12:28:35.273: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 14 12:28:40.278: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 14 12:28:40.278: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 14 12:28:40.410: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6178,SelfLink:/apis/apps/v1/namespaces/deployment-6178/deployments/test-cleanup-deployment,UID:9631dc09-5746-4ac2-aac5-e59d68036105,ResourceVersion:4890407,Generation:1,CreationTimestamp:2020-08-14 12:28:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Aug 14 12:28:40.424: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6178,SelfLink:/apis/apps/v1/namespaces/deployment-6178/replicasets/test-cleanup-deployment-55bbcbc84c,UID:7ee8c465-f697-47e6-8adf-71c60f6cc8db,ResourceVersion:4890409,Generation:1,CreationTimestamp:2020-08-14 12:28:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9631dc09-5746-4ac2-aac5-e59d68036105 0xc0030bc067 0xc0030bc068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 14 12:28:40.424: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 14 12:28:40.424: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-6178,SelfLink:/apis/apps/v1/namespaces/deployment-6178/replicasets/test-cleanup-controller,UID:8530ba48-c7e8-48c4-9462-c1ddd5d940fd,ResourceVersion:4890408,Generation:1,CreationTimestamp:2020-08-14 12:28:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9631dc09-5746-4ac2-aac5-e59d68036105 0xc000545e47 0xc000545e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 14 12:28:40.484: INFO: Pod "test-cleanup-controller-bgrcp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-bgrcp,GenerateName:test-cleanup-controller-,Namespace:deployment-6178,SelfLink:/api/v1/namespaces/deployment-6178/pods/test-cleanup-controller-bgrcp,UID:69b635d9-d851-4d5d-8ca8-afa427d4cd52,ResourceVersion:4890403,Generation:0,CreationTimestamp:2020-08-14 12:28:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 8530ba48-c7e8-48c4-9462-c1ddd5d940fd 0xc0030bc937 0xc0030bc938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvb4c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvb4c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvb4c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030bc9b0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0030bc9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 12:28:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 12:28:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 12:28:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 12:28:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.206,StartTime:2020-08-14 12:28:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-14 12:28:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a2084653c8e1781cb71e273fd8c1f80ee8357957e9a1102d5370b358ce52c74b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 14 12:28:40.484: INFO: Pod "test-cleanup-deployment-55bbcbc84c-4bpdk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-4bpdk,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-6178,SelfLink:/api/v1/namespaces/deployment-6178/pods/test-cleanup-deployment-55bbcbc84c-4bpdk,UID:fb3efb71-033a-4f58-94b8-198061e72f9e,ResourceVersion:4890415,Generation:0,CreationTimestamp:2020-08-14 12:28:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 7ee8c465-f697-47e6-8adf-71c60f6cc8db 0xc0030bcab7 0xc0030bcab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvb4c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvb4c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-tvb4c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030bcb30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030bcb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 12:28:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:28:40.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6178" for this suite.
Aug 14 12:28:46.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:28:46.795: INFO: namespace deployment-6178 deletion completed in 6.245371176s

• [SLOW TEST:11.592 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
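The cleanup behavior asserted above hinges on the deployment's history limit, visible as RevisionHistoryLimit:*0 in the dump. A hand-run sketch (the deployment name and images mirror the log; the patch/set-image flow is illustrative, not the test's exact API calls):

```shell
# Start from an nginx deployment, then cap kept revisions at zero.
kubectl create deployment test-cleanup --image=docker.io/library/nginx:1.14-alpine
kubectl patch deployment test-cleanup -p '{"spec":{"revisionHistoryLimit":0}}'

# Roll out a new image; with revisionHistoryLimit 0 the superseded
# ReplicaSet is deleted by the controller instead of being kept for rollback.
kubectl set image deployment/test-cleanup nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0

# Only the new ReplicaSet should remain once the rollout settles.
kubectl get rs -l app=test-cleanup
```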
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:28:46.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 14 12:28:47.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca" in namespace "downward-api-58" to be "success or failure"
Aug 14 12:28:47.705: INFO: Pod "downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca": Phase="Pending", Reason="", readiness=false. Elapsed: 24.210688ms
Aug 14 12:28:50.161: INFO: Pod "downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480089516s
Aug 14 12:28:52.272: INFO: Pod "downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591223372s
Aug 14 12:28:54.276: INFO: Pod "downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.595841436s
STEP: Saw pod success
Aug 14 12:28:54.276: INFO: Pod "downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca" satisfied condition "success or failure"
Aug 14 12:28:54.279: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca container client-container: 
STEP: delete the pod
Aug 14 12:28:54.325: INFO: Waiting for pod downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca to disappear
Aug 14 12:28:54.333: INFO: Pod downwardapi-volume-8053bb83-a9f9-491f-b6ab-2fb84e1edcca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:28:54.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-58" for this suite.
Aug 14 12:29:00.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:29:00.445: INFO: namespace downward-api-58 deletion completed in 6.107519579s

• [SLOW TEST:13.649 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
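The default-memory-limit behavior checked above can be reproduced with a downward API volume; a minimal sketch (pod and path names are illustrative — when the container sets no memory limit, limits.memory resolves to the node's allocatable memory, which is what the test asserts):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # No resources.limits.memory is set, so the downward API falls
    # back to the node allocatable value.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
```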
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:29:00.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Aug 14 12:29:00.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4453 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 14 12:29:04.882: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0814 12:29:04.790623    4390 log.go:172] (0xc00090c2c0) (0xc0002f0640) Create stream\nI0814 12:29:04.790709    4390 log.go:172] (0xc00090c2c0) (0xc0002f0640) Stream added, broadcasting: 1\nI0814 12:29:04.793594    4390 log.go:172] (0xc00090c2c0) Reply frame received for 1\nI0814 12:29:04.793626    4390 log.go:172] (0xc00090c2c0) (0xc0002f06e0) Create stream\nI0814 12:29:04.793635    4390 log.go:172] (0xc00090c2c0) (0xc0002f06e0) Stream added, broadcasting: 3\nI0814 12:29:04.794706    4390 log.go:172] (0xc00090c2c0) Reply frame received for 3\nI0814 12:29:04.794765    4390 log.go:172] (0xc00090c2c0) (0xc0005b4000) Create stream\nI0814 12:29:04.794784    4390 log.go:172] (0xc00090c2c0) (0xc0005b4000) Stream added, broadcasting: 5\nI0814 12:29:04.795759    4390 log.go:172] (0xc00090c2c0) Reply frame received for 5\nI0814 12:29:04.795810    4390 log.go:172] (0xc00090c2c0) (0xc0006741e0) Create stream\nI0814 12:29:04.795824    4390 log.go:172] (0xc00090c2c0) (0xc0006741e0) Stream added, broadcasting: 7\nI0814 12:29:04.797113    4390 log.go:172] (0xc00090c2c0) Reply frame received for 7\nI0814 12:29:04.797320    4390 log.go:172] (0xc0002f06e0) (3) Writing data frame\nI0814 12:29:04.797598    4390 log.go:172] (0xc0002f06e0) (3) Writing data frame\nI0814 12:29:04.798813    4390 log.go:172] (0xc00090c2c0) Data frame received for 5\nI0814 12:29:04.798904    4390 log.go:172] (0xc0005b4000) (5) Data frame handling\nI0814 12:29:04.798930    4390 log.go:172] (0xc0005b4000) (5) Data frame sent\nI0814 12:29:04.799590    4390 log.go:172] (0xc00090c2c0) Data frame received for 5\nI0814 12:29:04.799626    4390 log.go:172] (0xc0005b4000) (5) Data frame handling\nI0814 12:29:04.799664    4390 log.go:172] (0xc0005b4000) (5) Data frame 
sent\nI0814 12:29:04.855812    4390 log.go:172] (0xc00090c2c0) Data frame received for 5\nI0814 12:29:04.855856    4390 log.go:172] (0xc00090c2c0) Data frame received for 7\nI0814 12:29:04.855889    4390 log.go:172] (0xc0006741e0) (7) Data frame handling\nI0814 12:29:04.855941    4390 log.go:172] (0xc0005b4000) (5) Data frame handling\nI0814 12:29:04.856143    4390 log.go:172] (0xc00090c2c0) Data frame received for 1\nI0814 12:29:04.856172    4390 log.go:172] (0xc0002f0640) (1) Data frame handling\nI0814 12:29:04.856193    4390 log.go:172] (0xc0002f0640) (1) Data frame sent\nI0814 12:29:04.856232    4390 log.go:172] (0xc00090c2c0) (0xc0002f0640) Stream removed, broadcasting: 1\nI0814 12:29:04.856267    4390 log.go:172] (0xc00090c2c0) (0xc0002f06e0) Stream removed, broadcasting: 3\nI0814 12:29:04.856294    4390 log.go:172] (0xc00090c2c0) Go away received\nI0814 12:29:04.856559    4390 log.go:172] (0xc00090c2c0) (0xc0002f0640) Stream removed, broadcasting: 1\nI0814 12:29:04.856602    4390 log.go:172] (0xc00090c2c0) (0xc0002f06e0) Stream removed, broadcasting: 3\nI0814 12:29:04.856622    4390 log.go:172] (0xc00090c2c0) (0xc0005b4000) Stream removed, broadcasting: 5\nI0814 12:29:04.856657    4390 log.go:172] (0xc00090c2c0) (0xc0006741e0) Stream removed, broadcasting: 7\n"
Aug 14 12:29:04.882: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:29:06.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4453" for this suite.
Aug 14 12:29:12.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:29:13.005: INFO: namespace kubectl-4453 deletion completed in 6.108835106s

• [SLOW TEST:12.560 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
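The --rm invocation above can be run standalone; this mirrors the command from the log (note the stderr warning: --generator=job/v1 is already deprecated in this release):

```shell
# Pipe data to the job's stdin; kubectl attaches, prints the container's
# output ("abcd1234stdin closed"), and deletes the job.batch object
# once the command exits.
echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin \
  -- sh -c "cat && echo 'stdin closed'"
```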
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:29:13.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 14 12:29:13.624: INFO: Waiting up to 5m0s for pod "pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690" in namespace "emptydir-3658" to be "success or failure"
Aug 14 12:29:13.646: INFO: Pod "pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690": Phase="Pending", Reason="", readiness=false. Elapsed: 22.066282ms
Aug 14 12:29:15.649: INFO: Pod "pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025516641s
Aug 14 12:29:17.913: INFO: Pod "pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690": Phase="Running", Reason="", readiness=true. Elapsed: 4.289679373s
Aug 14 12:29:19.917: INFO: Pod "pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.293702697s
STEP: Saw pod success
Aug 14 12:29:19.917: INFO: Pod "pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690" satisfied condition "success or failure"
Aug 14 12:29:19.921: INFO: Trying to get logs from node iruya-worker pod pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690 container test-container: 
STEP: delete the pod
Aug 14 12:29:20.340: INFO: Waiting for pod pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690 to disappear
Aug 14 12:29:20.596: INFO: Pod pod-2180b7ae-d090-40b5-b1ed-ed2ff62e8690 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:29:20.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3658" for this suite.
Aug 14 12:29:26.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:29:26.744: INFO: namespace emptydir-3658 deletion completed in 6.143778193s

• [SLOW TEST:13.738 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
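The (non-root,0777,default) case above boils down to a non-root container inspecting an emptyDir on the default medium; a sketch under those assumptions (names and the UID are illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Print the mount's permissions; emptyDir defaults to mode 0777,
    # so the non-root user can write to it.
    command: ["sh", "-c", "ls -ld /test-volume"]
    securityContext:
      runAsUser: 1000        # non-root, per the [LinuxOnly] variant
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium (node storage, not tmpfs)
EOF
```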
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:29:26.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-393ace9d-4c9b-42b9-94cb-a1b22a775b38
STEP: Creating a pod to test consume configMaps
Aug 14 12:29:26.958: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00" in namespace "projected-5883" to be "success or failure"
Aug 14 12:29:26.975: INFO: Pod "pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00": Phase="Pending", Reason="", readiness=false. Elapsed: 16.666703ms
Aug 14 12:29:28.978: INFO: Pod "pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019748196s
Aug 14 12:29:30.981: INFO: Pod "pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00": Phase="Running", Reason="", readiness=true. Elapsed: 4.022629514s
Aug 14 12:29:32.984: INFO: Pod "pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025473706s
STEP: Saw pod success
Aug 14 12:29:32.984: INFO: Pod "pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00" satisfied condition "success or failure"
Aug 14 12:29:32.986: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 14 12:29:33.007: INFO: Waiting for pod pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00 to disappear
Aug 14 12:29:33.029: INFO: Pod pod-projected-configmaps-298978e1-9d3d-4305-853d-e290d20faf00 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:29:33.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5883" for this suite.
Aug 14 12:29:39.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:29:39.141: INFO: namespace projected-5883 deletion completed in 6.10894017s

• [SLOW TEST:12.397 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:29:39.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-73b118e8-16d2-44a9-96b9-56c28f86203a
STEP: Creating a pod to test consume configMaps
Aug 14 12:29:39.323: INFO: Waiting up to 5m0s for pod "pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37" in namespace "configmap-4022" to be "success or failure"
Aug 14 12:29:39.364: INFO: Pod "pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37": Phase="Pending", Reason="", readiness=false. Elapsed: 40.872856ms
Aug 14 12:29:41.367: INFO: Pod "pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044087035s
Aug 14 12:29:43.580: INFO: Pod "pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37": Phase="Running", Reason="", readiness=true. Elapsed: 4.257316052s
Aug 14 12:29:45.584: INFO: Pod "pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.261729351s
STEP: Saw pod success
Aug 14 12:29:45.585: INFO: Pod "pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37" satisfied condition "success or failure"
Aug 14 12:29:45.587: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37 container configmap-volume-test: 
STEP: delete the pod
Aug 14 12:29:45.604: INFO: Waiting for pod pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37 to disappear
Aug 14 12:29:45.608: INFO: Pod pod-configmaps-9152695f-7efb-4456-b008-2b875341bd37 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:29:45.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4022" for this suite.
Aug 14 12:29:51.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:29:51.701: INFO: namespace configmap-4022 deletion completed in 6.088301165s

• [SLOW TEST:12.560 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:29:51.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 14 12:29:51.824: INFO: Waiting up to 5m0s for pod "pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52" in namespace "emptydir-4909" to be "success or failure"
Aug 14 12:29:51.830: INFO: Pod "pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.62717ms
Aug 14 12:29:54.506: INFO: Pod "pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682143148s
Aug 14 12:29:56.510: INFO: Pod "pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52": Phase="Running", Reason="", readiness=true. Elapsed: 4.685514917s
Aug 14 12:29:58.513: INFO: Pod "pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.689381761s
STEP: Saw pod success
Aug 14 12:29:58.514: INFO: Pod "pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52" satisfied condition "success or failure"
Aug 14 12:29:58.516: INFO: Trying to get logs from node iruya-worker2 pod pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52 container test-container: 
STEP: delete the pod
Aug 14 12:29:58.533: INFO: Waiting for pod pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52 to disappear
Aug 14 12:29:58.544: INFO: Pod pod-63f59b31-08b1-4a15-b5fd-b6b286a82e52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:29:58.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4909" for this suite.
Aug 14 12:30:04.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:30:04.769: INFO: namespace emptydir-4909 deletion completed in 6.221699354s

• [SLOW TEST:13.068 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 14 12:30:04.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 14 12:30:13.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6828" for this suite.
Aug 14 12:30:19.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 14 12:30:19.208: INFO: namespace kubelet-test-6828 deletion completed in 6.105267768s

• [SLOW TEST:14.439 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
Aug 14 12:30:19.208: INFO: Running AfterSuite actions on all nodes
Aug 14 12:30:19.208: INFO: Running AfterSuite actions on node 1
Aug 14 12:30:19.208: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 8754.931 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS