I0215 10:47:15.928270 8 e2e.go:224] Starting e2e run "8769ffb2-4fe0-11ea-960a-0242ac110007" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581763635 - Will randomize all specs
Will run 201 of 2164 specs

Feb 15 10:47:16.437: INFO: >>> kubeConfig: /root/.kube/config
Feb 15 10:47:16.444: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 15 10:47:16.464: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 15 10:47:16.572: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 15 10:47:16.572: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 15 10:47:16.572: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 15 10:47:16.583: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 15 10:47:16.583: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 15 10:47:16.583: INFO: e2e test version: v1.13.12
Feb 15 10:47:16.585: INFO: kube-apiserver version: v1.13.8
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 10:47:16.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Feb 15 10:47:16.689: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 15 10:47:16.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-db49j" to be "success or failure"
Feb 15 10:47:16.705: INFO: Pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.705787ms
Feb 15 10:47:18.754: INFO: Pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05498251s
Feb 15 10:47:20.774: INFO: Pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075438701s
Feb 15 10:47:23.580: INFO: Pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.881552799s
Feb 15 10:47:25.605: INFO: Pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.906339843s
Feb 15 10:47:27.808: INFO: Pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.109616969s
STEP: Saw pod success
Feb 15 10:47:27.809: INFO: Pod "downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 10:47:27.921: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007 container client-container:
STEP: delete the pod
Feb 15 10:47:28.146: INFO: Waiting for pod downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007 to disappear
Feb 15 10:47:28.155: INFO: Pod downwardapi-volume-88574620-4fe0-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 10:47:28.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-db49j" for this suite.
Feb 15 10:47:34.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 10:47:34.516: INFO: namespace: e2e-tests-downward-api-db49j, resource: bindings, ignored listing per whitelist
Feb 15 10:47:34.659: INFO: namespace e2e-tests-downward-api-db49j deletion completed in 6.495958441s
• [SLOW TEST:18.074 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 10:47:34.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0215 10:47:51.678385 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
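[Editor's note] The garbage-collector steps above hinge on half of the simpletest-rc-to-be-deleted pods carrying a second entry in metadata.ownerReferences that points at simpletest-rc-to-stay. As a minimal sketch of how to inspect that relationship by hand against the same cluster (the pod-name and namespace placeholders are illustrative, not values from this log):

  kubectl get pod <pod-name> -n <gc-test-namespace> -o jsonpath='{.metadata.ownerReferences[*].name}'

A pod that still lists simpletest-rc-to-stay as an owner after simpletest-rc-to-be-deleted is removed keeps a valid owner, so the garbage collector is expected to leave it alone; pods whose only owner was the deleted RC are collected.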
Feb 15 10:47:51.678: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:47:51.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-hcxls" for this suite. Feb 15 10:48:19.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:48:19.907: INFO: namespace: e2e-tests-gc-hcxls, resource: bindings, ignored listing per whitelist Feb 15 10:48:19.955: INFO: namespace e2e-tests-gc-hcxls deletion completed in 28.272225635s • [SLOW TEST:45.296 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:48:19.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 10:48:20.220: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 15 10:48:25.240: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 15 10:48:31.261: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 15 10:48:33.285: INFO: Creating deployment "test-rollover-deployment" Feb 15 10:48:33.388: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 15 10:48:35.414: INFO: Check revision of new replica set for deployment 
"test-rollover-deployment" Feb 15 10:48:35.429: INFO: Ensure that both replica sets have 1 created replica Feb 15 10:48:35.440: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 15 10:48:35.466: INFO: Updating deployment test-rollover-deployment Feb 15 10:48:35.466: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 15 10:48:37.936: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 15 10:48:37.968: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 15 10:48:38.950: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:38.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360516, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:40.972: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:40.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360516, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:42.978: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:42.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360516, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:44.986: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:44.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360516, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:47.014: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:47.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360526, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:49.013: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:49.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360526, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:50.975: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:50.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360526, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:52.976: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:52.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360526, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:54.979: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:54.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360526, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:57.011: INFO: all replica sets need to contain the pod-template-hash label Feb 15 10:48:57.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360526, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717360513, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 10:48:58.989: INFO: Feb 15 10:48:58.989: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 15 10:48:59.048: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-5st6d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5st6d/deployments/test-rollover-deployment,UID:b5ffa1d9-4fe0-11ea-a994-fa163e34d433,ResourceVersion:21742605,Generation:2,CreationTimestamp:2020-02-15 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-15 10:48:33 +0000 UTC 2020-02-15 10:48:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-15 10:48:57 +0000 UTC 2020-02-15 10:48:33 
+0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 15 10:48:59.061: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-5st6d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5st6d/replicasets/test-rollover-deployment-5b8479fdb6,UID:b74b57b7-4fe0-11ea-a994-fa163e34d433,ResourceVersion:21742596,Generation:2,CreationTimestamp:2020-02-15 10:48:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b5ffa1d9-4fe0-11ea-a994-fa163e34d433 0xc001f6c7e7 0xc001f6c7e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 15 10:48:59.061: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 15 10:48:59.062: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-5st6d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5st6d/replicasets/test-rollover-controller,UID:ae33ec0b-4fe0-11ea-a994-fa163e34d433,ResourceVersion:21742604,Generation:2,CreationTimestamp:2020-02-15 10:48:20 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b5ffa1d9-4fe0-11ea-a994-fa163e34d433 0xc001f6c63f 0xc001f6c650}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 10:48:59.063: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-5st6d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5st6d/replicasets/test-rollover-deployment-58494b7559,UID:b616ccb1-4fe0-11ea-a994-fa163e34d433,ResourceVersion:21742554,Generation:2,CreationTimestamp:2020-02-15 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b5ffa1d9-4fe0-11ea-a994-fa163e34d433 0xc001f6c717 0xc001f6c718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 
58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 10:48:59.134: INFO: Pod "test-rollover-deployment-5b8479fdb6-tslfg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-tslfg,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-5st6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5st6d/pods/test-rollover-deployment-5b8479fdb6-tslfg,UID:b7f92363-4fe0-11ea-a994-fa163e34d433,ResourceVersion:21742581,Generation:0,CreationTimestamp:2020-02-15 10:48:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 b74b57b7-4fe0-11ea-a994-fa163e34d433 0xc001f6d397 0xc001f6d398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-br9p4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-br9p4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-br9p4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f6d400} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc001f6d420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:48:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:48:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:48:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:48:36 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-15 10:48:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-15 10:48:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://05de42494aa909bc7509d65c53e2a431c0c8acaa3fd4decfc2f4288a6a32cd8f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:48:59.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-5st6d" for this suite. Feb 15 10:49:07.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:49:07.271: INFO: namespace: e2e-tests-deployment-5st6d, resource: bindings, ignored listing per whitelist Feb 15 10:49:07.410: INFO: namespace e2e-tests-deployment-5st6d deletion completed in 8.252906658s • [SLOW TEST:47.454 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:49:07.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 10:49:09.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-9hhr7" to be "success or failure" Feb 15 10:49:09.569: INFO: Pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 32.449199ms Feb 15 10:49:11.646: INFO: Pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.110097093s Feb 15 10:49:13.674: INFO: Pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137405489s Feb 15 10:49:16.356: INFO: Pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81993674s Feb 15 10:49:18.371: INFO: Pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834696521s Feb 15 10:49:20.499: INFO: Pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.962511391s STEP: Saw pod success Feb 15 10:49:20.499: INFO: Pod "downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:49:20.507: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 10:49:20.691: INFO: Waiting for pod downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007 to disappear Feb 15 10:49:20.700: INFO: Pod downwardapi-volume-cb21c73b-4fe0-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:49:20.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9hhr7" for this suite. Feb 15 10:49:30.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:49:30.842: INFO: namespace: e2e-tests-downward-api-9hhr7, resource: bindings, ignored listing per whitelist Feb 15 10:49:30.906: INFO: namespace e2e-tests-downward-api-9hhr7 deletion completed in 10.198850152s • [SLOW TEST:23.496 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:49:30.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-dcdmt/configmap-test-d874cdfc-4fe0-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 10:49:31.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-dcdmt" to be "success or failure" Feb 15 10:49:31.138: INFO: Pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.763985ms Feb 15 10:49:34.790: INFO: Pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.662740437s Feb 15 10:49:36.800: INFO: Pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.673567499s Feb 15 10:49:39.907: INFO: Pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.779867302s Feb 15 10:49:41.930: INFO: Pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.803456642s Feb 15 10:49:43.952: INFO: Pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.824578439s STEP: Saw pod success Feb 15 10:49:43.952: INFO: Pod "pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:49:43.964: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007 container env-test: STEP: delete the pod Feb 15 10:49:44.163: INFO: Waiting for pod pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007 to disappear Feb 15 10:49:44.195: INFO: Pod pod-configmaps-d8761962-4fe0-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:49:44.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dcdmt" for this suite. Feb 15 10:49:50.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:49:50.664: INFO: namespace: e2e-tests-configmap-dcdmt, resource: bindings, ignored listing per whitelist Feb 15 10:49:50.789: INFO: namespace e2e-tests-configmap-dcdmt deletion completed in 6.5770049s • [SLOW TEST:19.883 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:49:50.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 15 10:49:50.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:49:53.719: INFO: stderr: "" 
Feb 15 10:49:53.719: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 15 10:49:53.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:49:54.100: INFO: stderr: "" Feb 15 10:49:54.101: INFO: stdout: "update-demo-nautilus-99snr update-demo-nautilus-mk57r " Feb 15 10:49:54.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99snr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:49:54.259: INFO: stderr: "" Feb 15 10:49:54.259: INFO: stdout: "" Feb 15 10:49:54.259: INFO: update-demo-nautilus-99snr is created but not running Feb 15 10:49:59.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:49:59.434: INFO: stderr: "" Feb 15 10:49:59.434: INFO: stdout: "update-demo-nautilus-99snr update-demo-nautilus-mk57r " Feb 15 10:49:59.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99snr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:49:59.565: INFO: stderr: "" Feb 15 10:49:59.565: INFO: stdout: "" Feb 15 10:49:59.565: INFO: update-demo-nautilus-99snr is created but not running Feb 15 10:50:04.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:04.727: INFO: stderr: "" Feb 15 10:50:04.727: INFO: stdout: "update-demo-nautilus-99snr update-demo-nautilus-mk57r " Feb 15 10:50:04.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99snr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:05.022: INFO: stderr: "" Feb 15 10:50:05.022: INFO: stdout: "" Feb 15 10:50:05.022: INFO: update-demo-nautilus-99snr is created but not running Feb 15 10:50:10.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:10.220: INFO: stderr: "" Feb 15 10:50:10.220: INFO: stdout: "update-demo-nautilus-99snr update-demo-nautilus-mk57r " Feb 15 10:50:10.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99snr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:10.327: INFO: stderr: "" Feb 15 10:50:10.327: INFO: stdout: "true" Feb 15 10:50:10.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99snr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:10.471: INFO: stderr: "" Feb 15 10:50:10.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 10:50:10.471: INFO: validating pod update-demo-nautilus-99snr Feb 15 10:50:10.507: INFO: got data: { "image": "nautilus.jpg" } Feb 15 10:50:10.507: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 10:50:10.507: INFO: update-demo-nautilus-99snr is verified up and running Feb 15 10:50:10.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mk57r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:10.696: INFO: stderr: "" Feb 15 10:50:10.696: INFO: stdout: "true" Feb 15 10:50:10.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mk57r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:10.816: INFO: stderr: "" Feb 15 10:50:10.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 10:50:10.816: INFO: validating pod update-demo-nautilus-mk57r Feb 15 10:50:10.829: INFO: got data: { "image": "nautilus.jpg" } Feb 15 10:50:10.829: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 10:50:10.829: INFO: update-demo-nautilus-mk57r is verified up and running STEP: rolling-update to new replication controller Feb 15 10:50:10.831: INFO: scanned /root for discovery docs: Feb 15 10:50:10.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:52.075: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 15 10:50:52.075: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 15 10:50:52.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:52.318: INFO: stderr: "" Feb 15 10:50:52.318: INFO: stdout: "update-demo-kitten-kdvlh update-demo-kitten-l5mxj update-demo-nautilus-mk57r " STEP: Replicas for name=update-demo: expected=2 actual=3 Feb 15 10:50:57.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:57.527: INFO: stderr: "" Feb 15 10:50:57.527: INFO: stdout: "update-demo-kitten-kdvlh update-demo-kitten-l5mxj " Feb 15 10:50:57.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kdvlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:57.644: INFO: stderr: "" Feb 15 10:50:57.644: INFO: stdout: "true" Feb 15 10:50:57.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kdvlh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:57.818: INFO: stderr: "" Feb 15 10:50:57.818: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 15 10:50:57.818: INFO: validating pod update-demo-kitten-kdvlh Feb 15 10:50:57.847: INFO: got data: { "image": "kitten.jpg" } Feb 15 10:50:57.847: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 15 10:50:57.847: INFO: update-demo-kitten-kdvlh is verified up and running Feb 15 10:50:57.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l5mxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:57.988: INFO: stderr: "" Feb 15 10:50:57.989: INFO: stdout: "true" Feb 15 10:50:57.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l5mxj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r5hbm' Feb 15 10:50:58.138: INFO: stderr: "" Feb 15 10:50:58.138: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 15 10:50:58.138: INFO: validating pod update-demo-kitten-l5mxj Feb 15 10:50:58.149: INFO: got data: { "image": "kitten.jpg" } Feb 15 10:50:58.149: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 15 10:50:58.149: INFO: update-demo-kitten-l5mxj is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:50:58.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r5hbm" for this suite. 
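[Editor's note] The readiness checks in this test poll kubectl get pods -o template with a go-template. While the kitten pods were still around, an equivalent one-off spot check with jsonpath (pod name and namespace taken from this particular run; both differ every run) might look like:

  kubectl get pod update-demo-kitten-kdvlh -n e2e-tests-kubectl-r5hbm \
    -o jsonpath='{.status.containerStatuses[?(@.name=="update-demo")].state.running}'
  kubectl get pod update-demo-kitten-kdvlh -n e2e-tests-kubectl-r5hbm \
    -o jsonpath='{.spec.containers[?(@.name=="update-demo")].image}'

A non-empty running state plus the kitten image is what the loop above treats as "verified up and running".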
Feb 15 10:51:24.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 10:51:24.241: INFO: namespace: e2e-tests-kubectl-r5hbm, resource: bindings, ignored listing per whitelist
Feb 15 10:51:24.380: INFO: namespace e2e-tests-kubectl-r5hbm deletion completed in 26.224220853s
• [SLOW TEST:93.590 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 10:51:24.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 10:51:24.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-896fr'
Feb 15 10:51:25.112: INFO: stderr: ""
Feb 15 10:51:25.112: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb 15 10:51:25.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-896fr'
Feb 15 10:51:30.864: INFO: stderr: ""
Feb 15 10:51:30.864: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 10:51:30.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-896fr" for this suite.
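[Editor's note] A rough sketch of reproducing the Kubectl run pod check by hand, under the assumption of an arbitrary namespace (on newer kubectl releases the --generator flag used above is gone and kubectl run creates a bare pod by default, so it is omitted here):

  kubectl run e2e-test-nginx-pod --restart=Never --image=docker.io/library/nginx:1.14-alpine
  kubectl get pod e2e-test-nginx-pod -o jsonpath='{.status.phase}{"\n"}'
  kubectl get pod e2e-test-nginx-pod -o jsonpath='{.metadata.ownerReferences}{"\n"}'   # empty: a bare pod, not managed by any controller
  kubectl delete pod e2e-test-nginx-pod

The conformance step above only verifies that the Pod object exists; these commands make the same check manually.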
Feb 15 10:51:37.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:51:37.161: INFO: namespace: e2e-tests-kubectl-896fr, resource: bindings, ignored listing per whitelist Feb 15 10:51:37.328: INFO: namespace e2e-tests-kubectl-896fr deletion completed in 6.385357177s • [SLOW TEST:12.947 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:51:37.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 10:51:37.571: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 15 10:51:42.604: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 15 10:51:54.671: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 15 10:51:54.871: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-7bsd9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7bsd9/deployments/test-cleanup-deployment,UID:2e101d21-4fe1-11ea-a994-fa163e34d433,ResourceVersion:21743050,Generation:1,CreationTimestamp:2020-02-15 10:51:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 15 10:51:54.887: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Feb 15 10:51:54.887: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 15 10:51:54.887: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-7bsd9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7bsd9/replicasets/test-cleanup-controller,UID:23d4100c-4fe1-11ea-a994-fa163e34d433,ResourceVersion:21743051,Generation:1,CreationTimestamp:2020-02-15 10:51:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2e101d21-4fe1-11ea-a994-fa163e34d433 0xc00183f617 0xc00183f618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 15 10:51:54.931: INFO: Pod "test-cleanup-controller-dv7pq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dv7pq,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-7bsd9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7bsd9/pods/test-cleanup-controller-dv7pq,UID:23deb5c4-4fe1-11ea-a994-fa163e34d433,ResourceVersion:21743045,Generation:0,CreationTimestamp:2020-02-15 10:51:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 23d4100c-4fe1-11ea-a994-fa163e34d433 0xc001223d17 0xc001223d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sxfqb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxfqb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sxfqb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001223e30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001223ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:51:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:51:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:51:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 10:51:37 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-15 10:51:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 10:51:50 +0000 UTC,} nil} {nil nil nil} true 0 
nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b4e0408f4c07be80b342ea928de68c5bf45517f9ddd92bb5f072cf826a837432}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:51:54.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7bsd9" for this suite. Feb 15 10:52:05.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:52:05.195: INFO: namespace: e2e-tests-deployment-7bsd9, resource: bindings, ignored listing per whitelist Feb 15 10:52:05.279: INFO: namespace e2e-tests-deployment-7bsd9 deletion completed in 10.249234982s • [SLOW TEST:27.950 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:52:05.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:52:21.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-q457f" for this suite. 
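Adoption in the ReplicationController test above is visible purely through ownerReferences: the orphan pod pod-adoption gains a reference pointing at the controller with the matching selector. A by-hand check, assuming the pod and namespace names from this run and a placeholder for the controller name (which the log does not echo):

kubectl get pod pod-adoption --namespace=e2e-tests-replication-controller-q457f \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# expected: ReplicationController/<rc-name>

The same mechanism shows up in the Deployment dump earlier, where test-cleanup-controller carries an ownerReference to test-cleanup-deployment.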
Feb 15 10:52:45.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:52:45.808: INFO: namespace: e2e-tests-replication-controller-q457f, resource: bindings, ignored listing per whitelist Feb 15 10:52:45.887: INFO: namespace e2e-tests-replication-controller-q457f deletion completed in 24.231283515s • [SLOW TEST:40.608 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:52:45.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 15 10:52:46.692: INFO: Waiting up to 5m0s for pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph" in namespace "e2e-tests-svcaccounts-sd7pr" to be "success or failure" Feb 15 10:52:46.725: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Pending", Reason="", readiness=false. Elapsed: 33.103582ms Feb 15 10:52:48.748: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055514234s Feb 15 10:52:50.776: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083474922s Feb 15 10:52:52.799: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106972757s Feb 15 10:52:55.536: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Pending", Reason="", readiness=false. Elapsed: 8.843617906s Feb 15 10:52:57.560: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Pending", Reason="", readiness=false. Elapsed: 10.867854689s Feb 15 10:52:59.844: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Pending", Reason="", readiness=false. Elapsed: 13.151793214s Feb 15 10:53:01.883: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Running", Reason="", readiness=false. Elapsed: 15.190686881s Feb 15 10:53:03.916: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Running", Reason="", readiness=false. Elapsed: 17.224120893s Feb 15 10:53:05.934: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.241689046s STEP: Saw pod success Feb 15 10:53:05.934: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph" satisfied condition "success or failure" Feb 15 10:53:05.942: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph container token-test: STEP: delete the pod Feb 15 10:53:06.070: INFO: Waiting for pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph to disappear Feb 15 10:53:06.076: INFO: Pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-sqlph no longer exists STEP: Creating a pod to test consume service account root CA Feb 15 10:53:06.088: INFO: Waiting up to 5m0s for pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf" in namespace "e2e-tests-svcaccounts-sd7pr" to be "success or failure" Feb 15 10:53:06.104: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.22554ms Feb 15 10:53:08.121: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033069957s Feb 15 10:53:10.136: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048283479s Feb 15 10:53:12.896: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.808373808s Feb 15 10:53:14.943: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.855263659s Feb 15 10:53:16.961: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872642963s Feb 15 10:53:19.115: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.027292526s Feb 15 10:53:21.264: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.175655166s Feb 15 10:53:23.282: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.19425181s Feb 15 10:53:25.297: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.20949723s STEP: Saw pod success Feb 15 10:53:25.298: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf" satisfied condition "success or failure" Feb 15 10:53:25.302: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf container root-ca-test: STEP: delete the pod Feb 15 10:53:26.028: INFO: Waiting for pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf to disappear Feb 15 10:53:26.211: INFO: Pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-l8gnf no longer exists STEP: Creating a pod to test consume service account namespace Feb 15 10:53:26.252: INFO: Waiting up to 5m0s for pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l" in namespace "e2e-tests-svcaccounts-sd7pr" to be "success or failure" Feb 15 10:53:26.264: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.493238ms Feb 15 10:53:29.882: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.629539458s Feb 15 10:53:31.922: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. Elapsed: 5.669286099s Feb 15 10:53:34.860: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607950579s Feb 15 10:53:36.878: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.625368595s Feb 15 10:53:38.902: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. Elapsed: 12.649432375s Feb 15 10:53:41.365: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. Elapsed: 15.112953139s Feb 15 10:53:43.380: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Pending", Reason="", readiness=false. Elapsed: 17.128220128s Feb 15 10:53:45.393: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.140749469s STEP: Saw pod success Feb 15 10:53:45.393: INFO: Pod "pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l" satisfied condition "success or failure" Feb 15 10:53:45.398: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l container namespace-test: STEP: delete the pod Feb 15 10:53:46.266: INFO: Waiting for pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l to disappear Feb 15 10:53:46.523: INFO: Pod pod-service-account-4d0355cf-4fe1-11ea-960a-0242ac110007-4ls9l no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:53:46.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-sd7pr" for this suite. 
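The three pods in the ServiceAccounts test each read one of the files projected under the default mount path visible in the pod specs dumped earlier (/var/run/secrets/kubernetes.io/serviceaccount). For any running pod that has not opted out of automount, the same files can be listed directly; pod and namespace below are placeholders, not names from the run:

kubectl exec <pod-name> --namespace=<namespace> -- \
    ls /var/run/secrets/kubernetes.io/serviceaccount
# expected: ca.crt  namespace  token

token, ca.crt and namespace correspond to the token-test, root-ca-test and namespace-test containers exercised above.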
Feb 15 10:53:54.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:53:54.737: INFO: namespace: e2e-tests-svcaccounts-sd7pr, resource: bindings, ignored listing per whitelist Feb 15 10:53:54.811: INFO: namespace e2e-tests-svcaccounts-sd7pr deletion completed in 8.253424317s • [SLOW TEST:68.923 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:53:54.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Feb 15 10:53:55.204: INFO: Waiting up to 5m0s for pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-f2lxt" to be "success or failure" Feb 15 10:53:55.242: INFO: Pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 38.462232ms Feb 15 10:53:57.615: INFO: Pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.411425546s Feb 15 10:53:59.840: INFO: Pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.635888303s Feb 15 10:54:02.840: INFO: Pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.635573204s Feb 15 10:54:04.972: INFO: Pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.76806633s Feb 15 10:54:07.024: INFO: Pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.820374765s STEP: Saw pod success Feb 15 10:54:07.025: INFO: Pod "pod-75dd8a46-4fe1-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:54:07.042: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-75dd8a46-4fe1-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 10:54:07.299: INFO: Waiting for pod pod-75dd8a46-4fe1-11ea-960a-0242ac110007 to disappear Feb 15 10:54:07.308: INFO: Pod pod-75dd8a46-4fe1-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:54:07.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-f2lxt" for this suite. 
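The emptyDir test above only asserts on the mode bits of a default-medium mount. A minimal stand-alone reproduction, with an arbitrary pod name and a busybox image standing in for the suite's test image (all names here are assumptions, not taken from the run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the permission bits of the emptyDir mount, then exit
    command: ["sh", "-c", "stat -c '%a' /mnt/ed"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir: {}
EOF

Reading the pod's logs after it succeeds shows the mode that the conformance check above compares against for the default medium.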
Feb 15 10:54:13.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:54:13.382: INFO: namespace: e2e-tests-emptydir-f2lxt, resource: bindings, ignored listing per whitelist Feb 15 10:54:13.656: INFO: namespace e2e-tests-emptydir-f2lxt deletion completed in 6.341551746s • [SLOW TEST:18.844 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:54:13.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Feb 15 10:54:14.690: INFO: created pod pod-service-account-defaultsa Feb 15 10:54:14.691: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 15 10:54:14.856: INFO: created pod pod-service-account-mountsa Feb 15 10:54:14.856: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 15 10:54:14.924: INFO: created pod pod-service-account-nomountsa Feb 15 10:54:14.925: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 15 10:54:15.067: INFO: created pod pod-service-account-defaultsa-mountspec Feb 15 10:54:15.067: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 15 10:54:15.106: INFO: created pod pod-service-account-mountsa-mountspec Feb 15 10:54:15.107: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 15 10:54:15.126: INFO: created pod pod-service-account-nomountsa-mountspec Feb 15 10:54:15.126: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 15 10:54:15.160: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 15 10:54:15.160: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 15 10:54:15.362: INFO: created pod pod-service-account-mountsa-nomountspec Feb 15 10:54:15.362: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 15 10:54:16.795: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 15 10:54:16.795: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:54:16.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-pgcmc" for this suite. 
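The opt-out this test exercises is a single boolean that can be set on either the ServiceAccount or the Pod, with the Pod-level value taking precedence; it is the same AutomountServiceAccountToken field visible in the pod specs dumped earlier. A minimal sketch with placeholder names (not the suite's generated pods):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token-automount
spec:
  # overrides any automountServiceAccountToken setting on the ServiceAccount
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF

With this set, the pod starts without the /var/run/secrets/kubernetes.io/serviceaccount volume, which is what the "service account token volume mount: false" lines above record for the nomount variants.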
Feb 15 10:54:46.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:54:46.222: INFO: namespace: e2e-tests-svcaccounts-pgcmc, resource: bindings, ignored listing per whitelist Feb 15 10:54:46.272: INFO: namespace e2e-tests-svcaccounts-pgcmc deletion completed in 28.69026709s • [SLOW TEST:32.616 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:54:46.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Feb 15 10:54:46.503: INFO: Waiting up to 5m0s for pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007" in namespace "e2e-tests-var-expansion-qm494" to be "success or failure" Feb 15 10:54:46.637: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 133.346005ms Feb 15 10:54:48.687: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18371532s Feb 15 10:54:50.733: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229751987s Feb 15 10:54:52.750: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246589355s Feb 15 10:54:55.207: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.703190351s Feb 15 10:54:57.232: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.728468783s Feb 15 10:54:59.249: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.745596301s STEP: Saw pod success Feb 15 10:54:59.249: INFO: Pod "var-expansion-946e7528-4fe1-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:54:59.255: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-946e7528-4fe1-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 10:54:59.459: INFO: Waiting for pod var-expansion-946e7528-4fe1-11ea-960a-0242ac110007 to disappear Feb 15 10:54:59.562: INFO: Pod var-expansion-946e7528-4fe1-11ea-960a-0242ac110007 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:54:59.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-qm494" for this suite. Feb 15 10:55:05.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:55:05.818: INFO: namespace: e2e-tests-var-expansion-qm494, resource: bindings, ignored listing per whitelist Feb 15 10:55:05.839: INFO: namespace e2e-tests-var-expansion-qm494 deletion completed in 6.258974548s • [SLOW TEST:19.567 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:55:05.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-a01bbfe5-4fe1-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 10:55:06.525: INFO: Waiting up to 5m0s for pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-9qhxf" to be "success or failure" Feb 15 10:55:06.711: INFO: Pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 185.310577ms Feb 15 10:55:08.723: INFO: Pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197870438s Feb 15 10:55:10.743: INFO: Pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21750697s Feb 15 10:55:12.984: INFO: Pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458243828s Feb 15 10:55:15.001: INFO: Pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.475104291s Feb 15 10:55:17.012: INFO: Pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.48691035s STEP: Saw pod success Feb 15 10:55:17.013: INFO: Pod "pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:55:17.017: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 15 10:55:17.189: INFO: Waiting for pod pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007 to disappear Feb 15 10:55:17.202: INFO: Pod pod-secrets-a05cc45d-4fe1-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:55:17.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9qhxf" for this suite. Feb 15 10:55:24.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:55:24.145: INFO: namespace: e2e-tests-secrets-9qhxf, resource: bindings, ignored listing per whitelist Feb 15 10:55:24.240: INFO: namespace e2e-tests-secrets-9qhxf deletion completed in 7.014105412s STEP: Destroying namespace "e2e-tests-secret-namespace-wdcx5" for this suite. Feb 15 10:55:30.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:55:30.303: INFO: namespace: e2e-tests-secret-namespace-wdcx5, resource: bindings, ignored listing per whitelist Feb 15 10:55:30.574: INFO: namespace e2e-tests-secret-namespace-wdcx5 deletion completed in 6.333988478s • [SLOW TEST:24.735 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:55:30.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:55:37.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-mgcr4" for this suite. Feb 15 10:55:43.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:55:44.089: INFO: namespace: e2e-tests-namespaces-mgcr4, resource: bindings, ignored listing per whitelist Feb 15 10:55:44.103: INFO: namespace e2e-tests-namespaces-mgcr4 deletion completed in 6.718759261s STEP: Destroying namespace "e2e-tests-nsdeletetest-zrdzd" for this suite. Feb 15 10:55:44.107: INFO: Namespace e2e-tests-nsdeletetest-zrdzd was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-2wgtv" for this suite. Feb 15 10:55:50.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:55:50.231: INFO: namespace: e2e-tests-nsdeletetest-2wgtv, resource: bindings, ignored listing per whitelist Feb 15 10:55:50.533: INFO: namespace e2e-tests-nsdeletetest-2wgtv deletion completed in 6.426102388s • [SLOW TEST:19.958 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:55:50.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-baccea8f-4fe1-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 10:55:50.910: INFO: Waiting up to 5m0s for pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-swgw6" to be "success or failure" Feb 15 10:55:50.970: INFO: Pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 60.241328ms Feb 15 10:55:52.982: INFO: Pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07196447s Feb 15 10:55:54.993: INFO: Pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083096057s Feb 15 10:55:57.834: INFO: Pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.92422439s Feb 15 10:55:59.848: INFO: Pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.937802993s Feb 15 10:56:01.868: INFO: Pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.957471371s STEP: Saw pod success Feb 15 10:56:01.868: INFO: Pod "pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:56:01.872: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 15 10:56:03.101: INFO: Waiting for pod pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007 to disappear Feb 15 10:56:03.154: INFO: Pod pod-secrets-bacdba13-4fe1-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:56:03.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-swgw6" for this suite. Feb 15 10:56:11.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:56:11.660: INFO: namespace: e2e-tests-secrets-swgw6, resource: bindings, ignored listing per whitelist Feb 15 10:56:11.861: INFO: namespace e2e-tests-secrets-swgw6 deletion completed in 8.533341166s • [SLOW TEST:21.327 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:56:11.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Feb 15 10:56:12.051: INFO: Waiting up to 5m0s for pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007" in namespace "e2e-tests-containers-9v9j7" to be "success or failure" Feb 15 10:56:12.064: INFO: Pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.273158ms Feb 15 10:56:14.302: INFO: Pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251523027s Feb 15 10:56:16.321: INFO: Pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270106641s Feb 15 10:56:18.378: INFO: Pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327594264s Feb 15 10:56:20.873: INFO: Pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.822697048s Feb 15 10:56:22.885: INFO: Pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.834653837s STEP: Saw pod success Feb 15 10:56:22.885: INFO: Pod "client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:56:22.891: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 10:56:23.754: INFO: Waiting for pod client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007 to disappear Feb 15 10:56:23.766: INFO: Pod client-containers-c76d67c3-4fe1-11ea-960a-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:56:23.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-9v9j7" for this suite. Feb 15 10:56:29.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:56:29.978: INFO: namespace: e2e-tests-containers-9v9j7, resource: bindings, ignored listing per whitelist Feb 15 10:56:30.101: INFO: namespace e2e-tests-containers-9v9j7 deletion completed in 6.320779398s • [SLOW TEST:18.238 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:56:30.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-d253a682-4fe1-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 10:56:30.410: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-pdwds" to be "success or failure" Feb 15 10:56:30.431: INFO: Pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.007461ms Feb 15 10:56:32.459: INFO: Pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048616525s Feb 15 10:56:34.479: INFO: Pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.068281088s Feb 15 10:56:36.839: INFO: Pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428505704s Feb 15 10:56:38.880: INFO: Pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470127122s Feb 15 10:56:40.953: INFO: Pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.542643921s STEP: Saw pod success Feb 15 10:56:40.954: INFO: Pod "pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 10:56:40.973: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Feb 15 10:56:41.220: INFO: Waiting for pod pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007 to disappear Feb 15 10:56:41.324: INFO: Pod pod-projected-configmaps-d25480e7-4fe1-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 10:56:41.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pdwds" for this suite. Feb 15 10:56:47.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 10:56:47.635: INFO: namespace: e2e-tests-projected-pdwds, resource: bindings, ignored listing per whitelist Feb 15 10:56:47.663: INFO: namespace e2e-tests-projected-pdwds deletion completed in 6.325885485s • [SLOW TEST:17.562 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 10:56:47.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2zjrr Feb 15 10:56:57.950: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2zjrr STEP: checking the pod's current state and verifying that restartCount is present Feb 15 10:56:57.959: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:00:59.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2zjrr" for this suite. Feb 15 11:01:08.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:01:08.263: INFO: namespace: e2e-tests-container-probe-2zjrr, resource: bindings, ignored listing per whitelist Feb 15 11:01:08.271: INFO: namespace e2e-tests-container-probe-2zjrr deletion completed in 8.381478046s • [SLOW TEST:260.607 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:01:08.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-78339a28-4fe2-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 11:01:08.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-kqd5p" to be "success or failure" Feb 15 11:01:08.649: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.729529ms Feb 15 11:01:10.685: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050389307s Feb 15 11:01:12.714: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078695545s Feb 15 11:01:15.052: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417103703s Feb 15 11:01:17.958: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.32280123s Feb 15 11:01:19.974: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.338815874s Feb 15 11:01:21.989: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.354322159s STEP: Saw pod success Feb 15 11:01:21.989: INFO: Pod "pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:01:21.994: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 15 11:01:22.953: INFO: Waiting for pod pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007 to disappear Feb 15 11:01:23.155: INFO: Pod pod-configmaps-7834ec90-4fe2-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:01:23.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kqd5p" for this suite. Feb 15 11:01:29.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:01:29.365: INFO: namespace: e2e-tests-configmap-kqd5p, resource: bindings, ignored listing per whitelist Feb 15 11:01:29.470: INFO: namespace e2e-tests-configmap-kqd5p deletion completed in 6.301368008s • [SLOW TEST:21.199 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:01:29.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-84bbfb52-4fe2-11ea-960a-0242ac110007 STEP: Creating configMap with name cm-test-opt-upd-84bbffae-4fe2-11ea-960a-0242ac110007 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-84bbfb52-4fe2-11ea-960a-0242ac110007 STEP: Updating configmap cm-test-opt-upd-84bbffae-4fe2-11ea-960a-0242ac110007 STEP: Creating configMap with name cm-test-opt-create-84bc002e-4fe2-11ea-960a-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:02:54.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rjsvk" for this suite. 
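The "optional updates" test above relies on two properties of projected configMap volumes: sources marked optional do not block pod startup when the configMap is absent, and the kubelet refreshes the projected contents as configMaps are created, updated or deleted, which is what the "waiting to observe update in volume" step polls for. A minimal sketch of such a volume, using placeholder names rather than the generated cm-test-opt-* names from the run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-upd    # may not exist yet; optional keeps the pod startable
          optional: true
EOF

Once the configMap exists, its keys appear under /etc/projected after the kubelet's next sync, without restarting the pod.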
Feb 15 11:03:18.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:03:18.749: INFO: namespace: e2e-tests-projected-rjsvk, resource: bindings, ignored listing per whitelist Feb 15 11:03:18.819: INFO: namespace e2e-tests-projected-rjsvk deletion completed in 24.145432469s • [SLOW TEST:109.348 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:03:18.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 15 11:03:18.954: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 15 11:03:19.031: INFO: Waiting for terminating namespaces to be deleted... Feb 15 11:03:19.036: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 15 11:03:19.069: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:03:19.069: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 15 11:03:19.069: INFO: Container weave ready: true, restart count 0 Feb 15 11:03:19.069: INFO: Container weave-npc ready: true, restart count 0 Feb 15 11:03:19.069: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 15 11:03:19.069: INFO: Container coredns ready: true, restart count 0 Feb 15 11:03:19.069: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:03:19.069: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:03:19.069: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:03:19.069: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 15 11:03:19.069: INFO: Container coredns ready: true, restart count 0 Feb 15 11:03:19.069: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 15 11:03:19.069: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cd41ab58-4fe2-11ea-960a-0242ac110007 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-cd41ab58-4fe2-11ea-960a-0242ac110007 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-cd41ab58-4fe2-11ea-960a-0242ac110007 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:03:43.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-9l8wp" for this suite. Feb 15 11:04:07.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:04:07.712: INFO: namespace: e2e-tests-sched-pred-9l8wp, resource: bindings, ignored listing per whitelist Feb 15 11:04:07.829: INFO: namespace e2e-tests-sched-pred-9l8wp deletion completed in 24.204454186s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:49.009 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:04:07.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-mkp6 STEP: Creating a pod to test atomic-volume-subpath Feb 15 11:04:08.407: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mkp6" in namespace "e2e-tests-subpath-5hthj" to be "success or failure" Feb 15 11:04:08.497: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 89.476297ms Feb 15 11:04:10.626: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218457873s Feb 15 11:04:12.644: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237148647s Feb 15 11:04:14.703: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296241399s Feb 15 11:04:16.771: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.363488383s Feb 15 11:04:18.782: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.374485349s Feb 15 11:04:20.799: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.392009843s Feb 15 11:04:22.854: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.44710468s Feb 15 11:04:24.893: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.4862777s Feb 15 11:04:26.912: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 18.505296709s Feb 15 11:04:28.931: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 20.524156582s Feb 15 11:04:30.950: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 22.543004947s Feb 15 11:04:32.972: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 24.564687566s Feb 15 11:04:34.992: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 26.58513457s Feb 15 11:04:37.007: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 28.599859868s Feb 15 11:04:39.027: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 30.619822639s Feb 15 11:04:41.045: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 32.637980175s Feb 15 11:04:43.071: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Running", Reason="", readiness=false. Elapsed: 34.663967712s Feb 15 11:04:45.082: INFO: Pod "pod-subpath-test-secret-mkp6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.675270078s STEP: Saw pod success Feb 15 11:04:45.082: INFO: Pod "pod-subpath-test-secret-mkp6" satisfied condition "success or failure" Feb 15 11:04:45.095: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-mkp6 container test-container-subpath-secret-mkp6: STEP: delete the pod Feb 15 11:04:45.739: INFO: Waiting for pod pod-subpath-test-secret-mkp6 to disappear Feb 15 11:04:46.120: INFO: Pod pod-subpath-test-secret-mkp6 no longer exists STEP: Deleting pod pod-subpath-test-secret-mkp6 Feb 15 11:04:46.120: INFO: Deleting pod "pod-subpath-test-secret-mkp6" in namespace "e2e-tests-subpath-5hthj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:04:46.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5hthj" for this suite. 
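The Subpath test above drives a secret-backed volume through a subPath mount. A minimal sketch of the pattern it exercises, mounting a single key of a Secret as one file inside the container (illustrative names and image, not the e2e fixtures; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-subpath-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "reader",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/app/token"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "creds",
                    MountPath: "/etc/app/token", // only this one file appears in the container
                    SubPath:   "token",          // key inside the Secret volume
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "creds",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "app-token"},
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

Unlike a whole-volume mount, a subPath mount exposes just the named key, and its content is fixed at pod start; later Secret updates are not propagated into subPath mounts.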
Feb 15 11:04:52.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:04:52.296: INFO: namespace: e2e-tests-subpath-5hthj, resource: bindings, ignored listing per whitelist Feb 15 11:04:52.426: INFO: namespace e2e-tests-subpath-5hthj deletion completed in 6.283921499s • [SLOW TEST:44.596 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:04:52.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 15 11:04:52.879: INFO: Waiting up to 5m0s for pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007" in namespace "e2e-tests-var-expansion-jznhr" to be "success or failure" Feb 15 11:04:52.891: INFO: Pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.214515ms Feb 15 11:04:54.936: INFO: Pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056953324s Feb 15 11:04:56.948: INFO: Pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06865878s Feb 15 11:04:59.350: INFO: Pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470163349s Feb 15 11:05:01.393: INFO: Pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513903247s Feb 15 11:05:03.825: INFO: Pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.94578547s STEP: Saw pod success Feb 15 11:05:03.825: INFO: Pod "var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:05:03.835: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 11:05:04.412: INFO: Waiting for pod var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007 to disappear Feb 15 11:05:04.425: INFO: Pod var-expansion-fdd37125-4fe2-11ea-960a-0242ac110007 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:05:04.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-jznhr" for this suite. 
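The Variable Expansion test above checks that an environment variable can be composed from previously defined ones using the $(NAME) syntax. A minimal sketch of the same mechanism (illustrative names and image; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{
                    {Name: "FOO", Value: "foo-value"},
                    {Name: "BAR", Value: "bar-value"},
                    // $(NAME) references are expanded from variables defined
                    // earlier in this list before the container starts.
                    {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

The references are resolved at container start, so the container observes FOOBAR=foo-value;;bar-value, which is the kind of output the test greps for in the pod log.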
Feb 15 11:05:10.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:05:10.620: INFO: namespace: e2e-tests-var-expansion-jznhr, resource: bindings, ignored listing per whitelist Feb 15 11:05:10.693: INFO: namespace e2e-tests-var-expansion-jznhr deletion completed in 6.258305796s • [SLOW TEST:18.268 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:05:10.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 15 11:05:11.194: INFO: Number of nodes with available pods: 0 Feb 15 11:05:11.194: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:15.133: INFO: Number of nodes with available pods: 0 Feb 15 11:05:15.133: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:15.588: INFO: Number of nodes with available pods: 0 Feb 15 11:05:15.588: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:16.228: INFO: Number of nodes with available pods: 0 Feb 15 11:05:16.228: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:17.309: INFO: Number of nodes with available pods: 0 Feb 15 11:05:17.309: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:18.219: INFO: Number of nodes with available pods: 0 Feb 15 11:05:18.219: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:19.679: INFO: Number of nodes with available pods: 0 Feb 15 11:05:19.679: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:20.471: INFO: Number of nodes with available pods: 0 Feb 15 11:05:20.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:21.226: INFO: Number of nodes with available pods: 0 Feb 15 11:05:21.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:22.222: INFO: Number of nodes with available pods: 0 Feb 15 11:05:22.223: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:23.230: INFO: Number of nodes with available pods: 1 Feb 15 11:05:23.230: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a daemon pod's phase to 'Failed', check that the daemon 
pod is revived. Feb 15 11:05:23.441: INFO: Number of nodes with available pods: 0 Feb 15 11:05:23.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:24.471: INFO: Number of nodes with available pods: 0 Feb 15 11:05:24.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:25.465: INFO: Number of nodes with available pods: 0 Feb 15 11:05:25.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:26.616: INFO: Number of nodes with available pods: 0 Feb 15 11:05:26.616: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:27.466: INFO: Number of nodes with available pods: 0 Feb 15 11:05:27.467: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:28.570: INFO: Number of nodes with available pods: 0 Feb 15 11:05:28.570: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:29.475: INFO: Number of nodes with available pods: 0 Feb 15 11:05:29.475: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:30.507: INFO: Number of nodes with available pods: 0 Feb 15 11:05:30.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:31.958: INFO: Number of nodes with available pods: 0 Feb 15 11:05:31.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:32.499: INFO: Number of nodes with available pods: 0 Feb 15 11:05:32.499: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:33.560: INFO: Number of nodes with available pods: 0 Feb 15 11:05:33.560: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:34.494: INFO: Number of nodes with available pods: 0 Feb 15 11:05:34.495: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:35.482: INFO: Number of nodes with available pods: 0 Feb 15 11:05:35.482: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:05:36.500: INFO: Number of nodes with available pods: 1 Feb 15 11:05:36.500: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. 
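The DaemonSet controller treats a daemon pod that reaches the Failed phase as missing and recreates it, which is what the retry test above forces by setting a pod's phase to Failed by hand. A minimal sketch of the kind of DaemonSet involved (illustrative names and image, not the e2e fixture; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"}

    ds := appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            // The controller keeps one pod matching this selector on every
            // eligible node; a pod that fails is deleted and recreated.
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "nginx:1.14-alpine",
                    }},
                },
            },
        },
    }

    out, _ := json.MarshalIndent(ds, "", "  ")
    fmt.Println(string(out))
}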
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-r4twp, will wait for the garbage collector to delete the pods Feb 15 11:05:36.620: INFO: Deleting DaemonSet.extensions daemon-set took: 44.54214ms Feb 15 11:05:36.821: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.761117ms Feb 15 11:05:52.666: INFO: Number of nodes with available pods: 0 Feb 15 11:05:52.666: INFO: Number of running nodes: 0, number of available pods: 0 Feb 15 11:05:52.680: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-r4twp/daemonsets","resourceVersion":"21744695"},"items":null} Feb 15 11:05:52.688: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-r4twp/pods","resourceVersion":"21744695"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:05:52.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-r4twp" for this suite. Feb 15 11:05:58.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:05:58.810: INFO: namespace: e2e-tests-daemonsets-r4twp, resource: bindings, ignored listing per whitelist Feb 15 11:05:58.966: INFO: namespace e2e-tests-daemonsets-r4twp deletion completed in 6.250867878s • [SLOW TEST:48.272 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:05:58.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:05:59.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-jqmz6" to be "success or failure" Feb 15 11:05:59.466: INFO: Pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.330994ms Feb 15 11:06:02.097: INFO: Pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.659757944s Feb 15 11:06:04.111: INFO: Pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.673712682s Feb 15 11:06:06.927: INFO: Pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.489037208s Feb 15 11:06:08.940: INFO: Pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.501997589s Feb 15 11:06:10.956: INFO: Pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.518193617s STEP: Saw pod success Feb 15 11:06:10.956: INFO: Pod "downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:06:10.962: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:06:11.153: INFO: Waiting for pod downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007 to disappear Feb 15 11:06:11.164: INFO: Pod downwardapi-volume-25891320-4fe3-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:06:11.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jqmz6" for this suite. Feb 15 11:06:17.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:06:17.256: INFO: namespace: e2e-tests-downward-api-jqmz6, resource: bindings, ignored listing per whitelist Feb 15 11:06:17.441: INFO: namespace e2e-tests-downward-api-jqmz6 deletion completed in 6.269929731s • [SLOW TEST:18.474 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:06:17.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-3067a215-4fe3-11ea-960a-0242ac110007 STEP: Creating secret with name s-test-opt-upd-3067a2d5-4fe3-11ea-960a-0242ac110007 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3067a215-4fe3-11ea-960a-0242ac110007 STEP: Updating secret s-test-opt-upd-3067a2d5-4fe3-11ea-960a-0242ac110007 STEP: Creating secret 
with name s-test-opt-create-3067a305-4fe3-11ea-960a-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:07:40.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lqd7c" for this suite. Feb 15 11:08:04.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:08:04.407: INFO: namespace: e2e-tests-secrets-lqd7c, resource: bindings, ignored listing per whitelist Feb 15 11:08:04.470: INFO: namespace e2e-tests-secrets-lqd7c deletion completed in 24.202201011s • [SLOW TEST:107.028 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:08:04.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pq7f8 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 15 11:08:04.718: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 15 11:08:41.210: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-pq7f8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 11:08:41.211: INFO: >>> kubeConfig: /root/.kube/config I0215 11:08:41.312974 8 log.go:172] (0xc00099d8c0) (0xc000c77360) Create stream I0215 11:08:41.313166 8 log.go:172] (0xc00099d8c0) (0xc000c77360) Stream added, broadcasting: 1 I0215 11:08:41.322976 8 log.go:172] (0xc00099d8c0) Reply frame received for 1 I0215 11:08:41.323316 8 log.go:172] (0xc00099d8c0) (0xc00191a500) Create stream I0215 11:08:41.323366 8 log.go:172] (0xc00099d8c0) (0xc00191a500) Stream added, broadcasting: 3 I0215 11:08:41.326258 8 log.go:172] (0xc00099d8c0) Reply frame received for 3 I0215 11:08:41.326391 8 log.go:172] (0xc00099d8c0) (0xc000c77400) Create stream I0215 11:08:41.326439 8 log.go:172] (0xc00099d8c0) (0xc000c77400) Stream added, broadcasting: 5 I0215 11:08:41.328576 8 log.go:172] (0xc00099d8c0) Reply frame received for 5 I0215 11:08:41.498243 8 log.go:172] (0xc00099d8c0) Data frame received for 3 I0215 11:08:41.498403 8 log.go:172] (0xc00191a500) (3) Data frame handling I0215 11:08:41.498430 8 log.go:172] (0xc00191a500) (3) Data 
frame sent I0215 11:08:41.702349 8 log.go:172] (0xc00099d8c0) (0xc000c77400) Stream removed, broadcasting: 5 I0215 11:08:41.703594 8 log.go:172] (0xc00099d8c0) Data frame received for 1 I0215 11:08:41.703728 8 log.go:172] (0xc00099d8c0) (0xc00191a500) Stream removed, broadcasting: 3 I0215 11:08:41.703789 8 log.go:172] (0xc000c77360) (1) Data frame handling I0215 11:08:41.703836 8 log.go:172] (0xc000c77360) (1) Data frame sent I0215 11:08:41.703859 8 log.go:172] (0xc00099d8c0) (0xc000c77360) Stream removed, broadcasting: 1 I0215 11:08:41.703895 8 log.go:172] (0xc00099d8c0) Go away received I0215 11:08:41.704405 8 log.go:172] (0xc00099d8c0) (0xc000c77360) Stream removed, broadcasting: 1 I0215 11:08:41.704441 8 log.go:172] (0xc00099d8c0) (0xc00191a500) Stream removed, broadcasting: 3 I0215 11:08:41.704459 8 log.go:172] (0xc00099d8c0) (0xc000c77400) Stream removed, broadcasting: 5 Feb 15 11:08:41.704: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:08:41.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-pq7f8" for this suite. Feb 15 11:09:05.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:09:06.054: INFO: namespace: e2e-tests-pod-network-test-pq7f8, resource: bindings, ignored listing per whitelist Feb 15 11:09:06.054: INFO: namespace e2e-tests-pod-network-test-pq7f8 deletion completed in 24.327640234s • [SLOW TEST:61.583 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:09:06.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 15 11:09:21.573: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:09:22.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-kvpr2" for this suite. 
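The ReplicaSet adopt/release behaviour logged above hinges entirely on label selectors and ownerReferences: a bare pod whose labels match the selector gets adopted instead of a new pod being created, and relabelling it releases it again. A minimal sketch of the two objects involved (illustrative names and image; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"name": "pod-adoption-release"}

    // A bare pod that already carries the label the ReplicaSet selects on.
    orphan := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "app", Image: "nginx:1.14-alpine"}},
        },
    }

    // A ReplicaSet whose selector matches the orphan: the controller adopts it
    // (adding itself as ownerReference) rather than creating a second pod.
    // Editing the pod's label so it no longer matches releases it again.
    rs := appsv1.ReplicaSet{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
        Spec: appsv1.ReplicaSetSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec:       orphan.Spec,
            },
        },
    }

    for _, obj := range []interface{}{orphan, rs} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}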
Feb 15 11:09:49.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:09:49.171: INFO: namespace: e2e-tests-replicaset-kvpr2, resource: bindings, ignored listing per whitelist Feb 15 11:09:49.386: INFO: namespace e2e-tests-replicaset-kvpr2 deletion completed in 26.721561431s • [SLOW TEST:43.331 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:09:49.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 15 11:09:49.675: INFO: Waiting up to 5m0s for pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-wj7zs" to be "success or failure" Feb 15 11:09:49.686: INFO: Pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165231ms Feb 15 11:09:51.924: INFO: Pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248150639s Feb 15 11:09:53.941: INFO: Pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265045764s Feb 15 11:09:56.149: INFO: Pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472864957s Feb 15 11:09:58.186: INFO: Pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510687954s Feb 15 11:10:00.206: INFO: Pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.530015361s STEP: Saw pod success Feb 15 11:10:00.206: INFO: Pod "pod-aec3815b-4fe3-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:10:00.217: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-aec3815b-4fe3-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:10:00.460: INFO: Waiting for pod pod-aec3815b-4fe3-11ea-960a-0242ac110007 to disappear Feb 15 11:10:00.474: INFO: Pod pod-aec3815b-4fe3-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:10:00.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wj7zs" for this suite. 
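The EmptyDir (root,0666,default) case above boils down to a root container writing a mode-0666 file onto an emptyDir of the default medium and verifying the result. A minimal sketch of that shape (illustrative names and image, not the e2e mounttest fixture; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Running as root (the image default), create a 0666 file on
                // the default-medium emptyDir and print its permissions.
                Command: []string{"sh", "-c",
                    "touch /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}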
Feb 15 11:10:08.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:10:08.699: INFO: namespace: e2e-tests-emptydir-wj7zs, resource: bindings, ignored listing per whitelist Feb 15 11:10:08.743: INFO: namespace e2e-tests-emptydir-wj7zs deletion completed in 8.260757964s • [SLOW TEST:19.357 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:10:08.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 15 11:10:24.909: INFO: Successfully updated pod "annotationupdatebb0d013f-4fe3-11ea-960a-0242ac110007" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:10:27.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k47zw" for this suite. 
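The annotation-update test above relies on the downward API volume re-projecting pod metadata after it changes. A minimal sketch of a pod exposing its own annotations through such a volume (illustrative names and image; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "annotationupdate-demo",
            Annotations: map[string]string{"build": "one"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "annotations",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                        }},
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

Patching the pod's annotations afterwards (for example with kubectl annotate --overwrite) should eventually change the contents of /etc/podinfo/annotations, which is the update the test waits to observe.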
Feb 15 11:10:51.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:10:51.173: INFO: namespace: e2e-tests-downward-api-k47zw, resource: bindings, ignored listing per whitelist Feb 15 11:10:51.254: INFO: namespace e2e-tests-downward-api-k47zw deletion completed in 24.230033988s • [SLOW TEST:42.510 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:10:51.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-d4t4 STEP: Creating a pod to test atomic-volume-subpath Feb 15 11:10:51.683: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d4t4" in namespace "e2e-tests-subpath-7qv9d" to be "success or failure" Feb 15 11:10:51.708: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 25.309383ms Feb 15 11:10:53.743: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059896612s Feb 15 11:10:55.773: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089456869s Feb 15 11:10:58.504: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.820352825s Feb 15 11:11:00.560: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.876842012s Feb 15 11:11:02.580: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.896667739s Feb 15 11:11:04.603: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.919515168s Feb 15 11:11:06.643: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.960020999s Feb 15 11:11:08.652: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.969280104s Feb 15 11:11:10.666: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 18.983069048s Feb 15 11:11:12.706: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 21.022545779s Feb 15 11:11:14.723: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. 
Elapsed: 23.03965349s Feb 15 11:11:16.737: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 25.053689598s Feb 15 11:11:18.765: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 27.081632401s Feb 15 11:11:20.775: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 29.092205083s Feb 15 11:11:22.803: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 31.120136152s Feb 15 11:11:24.827: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 33.143606529s Feb 15 11:11:26.884: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Running", Reason="", readiness=false. Elapsed: 35.200599669s Feb 15 11:11:29.448: INFO: Pod "pod-subpath-test-configmap-d4t4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.764794594s STEP: Saw pod success Feb 15 11:11:29.448: INFO: Pod "pod-subpath-test-configmap-d4t4" satisfied condition "success or failure" Feb 15 11:11:29.456: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-d4t4 container test-container-subpath-configmap-d4t4: STEP: delete the pod Feb 15 11:11:29.911: INFO: Waiting for pod pod-subpath-test-configmap-d4t4 to disappear Feb 15 11:11:29.928: INFO: Pod pod-subpath-test-configmap-d4t4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-d4t4 Feb 15 11:11:29.929: INFO: Deleting pod "pod-subpath-test-configmap-d4t4" in namespace "e2e-tests-subpath-7qv9d" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:11:29.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-7qv9d" for this suite. 
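The "mountPath of existing file" variant above exercises subPath mounting a ConfigMap key directly over a file that already exists in the image. A minimal sketch of that pattern, overriding a single existing config file without shadowing its directory (illustrative names, image, and paths, not the e2e fixture; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-subpath-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "web",
                Image: "nginx:1.14-alpine",
                VolumeMounts: []corev1.VolumeMount{{
                    Name: "conf",
                    // MountPath points at a file that already exists in the
                    // image; SubPath replaces just that file rather than
                    // hiding the rest of /etc/nginx.
                    MountPath: "/etc/nginx/nginx.conf",
                    SubPath:   "nginx.conf",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "conf",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "nginx-conf"},
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}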
Feb 15 11:11:36.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:11:36.161: INFO: namespace: e2e-tests-subpath-7qv9d, resource: bindings, ignored listing per whitelist Feb 15 11:11:36.231: INFO: namespace e2e-tests-subpath-7qv9d deletion completed in 6.260559296s • [SLOW TEST:44.976 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:11:36.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xrs8x Feb 15 11:11:48.621: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xrs8x STEP: checking the pod's current state and verifying that restartCount is present Feb 15 11:11:48.635: INFO: Initial restart count of pod liveness-http is 0 Feb 15 11:12:05.075: INFO: Restart count of pod e2e-tests-container-probe-xrs8x/liveness-http is now 1 (16.439824836s elapsed) Feb 15 11:12:25.562: INFO: Restart count of pod e2e-tests-container-probe-xrs8x/liveness-http is now 2 (36.926761374s elapsed) Feb 15 11:12:45.596: INFO: Restart count of pod e2e-tests-container-probe-xrs8x/liveness-http is now 3 (56.960769912s elapsed) Feb 15 11:13:05.885: INFO: Restart count of pod e2e-tests-container-probe-xrs8x/liveness-http is now 4 (1m17.250586726s elapsed) Feb 15 11:14:06.682: INFO: Restart count of pod e2e-tests-container-probe-xrs8x/liveness-http is now 5 (2m18.047147054s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:14:06.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xrs8x" for this suite. 
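The monotonically-increasing-restart-count test above uses an HTTP liveness probe that keeps failing: each failure kills the container and, under restartPolicy Always, the kubelet restarts it, so status.containerStatuses[].restartCount only ever grows. A minimal sketch of such a probe (illustrative names; the image is a placeholder for anything serving /healthz; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Assigning through the embedded handler field keeps this compatible
    // with client versions that name it Handler or ProbeHandler.
    liveness := &corev1.Probe{
        InitialDelaySeconds: 15,
        PeriodSeconds:       1,
        FailureThreshold:    1,
    }
    liveness.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            Containers: []corev1.Container{{
                Name:          "liveness",
                Image:         "example.invalid/liveness-demo", // hypothetical image serving /healthz
                LivenessProbe: liveness,
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}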
Feb 15 11:14:12.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:14:13.260: INFO: namespace: e2e-tests-container-probe-xrs8x, resource: bindings, ignored listing per whitelist Feb 15 11:14:13.311: INFO: namespace e2e-tests-container-probe-xrs8x deletion completed in 6.56845004s • [SLOW TEST:157.080 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:14:13.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:14:47.672: INFO: Container started at 2020-02-15 11:14:22 +0000 UTC, pod became ready at 2020-02-15 11:14:46 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:14:47.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-j8txv" for this suite. 
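The readiness-probe case above asserts that the pod is not reported Ready before the probe's initial delay has elapsed and the probe has succeeded, and that a readiness failure never restarts the container. A minimal sketch of a pod with an exec readiness probe (illustrative names and image; assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The pod must not become Ready before InitialDelaySeconds has passed
    // and the exec probe has succeeded at least once; readiness failures
    // only gate traffic, they never restart the container.
    readiness := &corev1.Probe{
        InitialDelaySeconds: 20,
        PeriodSeconds:       5,
    }
    readiness.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}}

    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:           "app",
                Image:          "busybox",
                Command:        []string{"sh", "-c", "touch /tmp/ready && sleep 3600"},
                ReadinessProbe: readiness,
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}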
Feb 15 11:15:11.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:15:11.879: INFO: namespace: e2e-tests-container-probe-j8txv, resource: bindings, ignored listing per whitelist Feb 15 11:15:11.892: INFO: namespace e2e-tests-container-probe-j8txv deletion completed in 24.210660505s • [SLOW TEST:58.581 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:15:11.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:15:12.146: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 15 11:15:12.343: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 15 11:15:17.386: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 15 11:15:23.435: INFO: Creating deployment "test-rolling-update-deployment" Feb 15 11:15:23.480: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 15 11:15:23.501: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 15 11:15:25.524: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 15 11:15:25.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362124, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 11:15:27.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362124, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 11:15:29.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362124, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 11:15:31.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362124, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 11:15:33.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717362123, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 11:15:35.581: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 15 11:15:35.725: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-x4mlx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x4mlx/deployments/test-rolling-update-deployment,UID:75b9d79f-4fe4-11ea-a994-fa163e34d433,ResourceVersion:21745768,Generation:1,CreationTimestamp:2020-02-15 11:15:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-15 11:15:23 +0000 UTC 2020-02-15 11:15:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-15 11:15:33 +0000 UTC 2020-02-15 11:15:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 15 11:15:35.735: INFO: New ReplicaSet 
"test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-x4mlx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x4mlx/replicasets/test-rolling-update-deployment-75db98fb4c,UID:75d03c6c-4fe4-11ea-a994-fa163e34d433,ResourceVersion:21745759,Generation:1,CreationTimestamp:2020-02-15 11:15:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 75b9d79f-4fe4-11ea-a994-fa163e34d433 0xc0015c8457 0xc0015c8458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 15 11:15:35.735: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 15 11:15:35.736: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-x4mlx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x4mlx/replicasets/test-rolling-update-controller,UID:6efee3b2-4fe4-11ea-a994-fa163e34d433,ResourceVersion:21745767,Generation:2,CreationTimestamp:2020-02-15 11:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 75b9d79f-4fe4-11ea-a994-fa163e34d433 0xc0015c8397 0xc0015c8398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 11:15:35.751: INFO: Pod "test-rolling-update-deployment-75db98fb4c-w46z2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-w46z2,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-x4mlx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x4mlx/pods/test-rolling-update-deployment-75db98fb4c-w46z2,UID:75db1ba5-4fe4-11ea-a994-fa163e34d433,ResourceVersion:21745758,Generation:0,CreationTimestamp:2020-02-15 11:15:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 75d03c6c-4fe4-11ea-a994-fa163e34d433 0xc000cc0267 0xc000cc0268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2595s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2595s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2595s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000cc0350} {node.kubernetes.io/unreachable Exists NoExecute 0xc000cc0370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:15:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:15:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:15:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:15:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-15 11:15:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-15 11:15:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://691eafc66fdf917c3b23ee3e275e8942f0cc60db27264442f83d1f9b065b909b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:15:35.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-x4mlx" for this suite. 
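Editor's note: for readers following the "test-rolling-update-deployment" dump above, here is a minimal Go sketch (using the k8s.io/api and k8s.io/apimachinery modules) of a Deployment with the same RollingUpdate strategy shape, 25% maxUnavailable and 25% maxSurge. The object name is illustrative; this is not the exact e2e fixture, only the pattern the test exercises.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")

	// A Deployment that rolls pods over gradually, mirroring the shape of the
	// deployment dumped in the log above (name is an illustrative stand-in).
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rolling-update-example"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable, // at most 25% of desired pods may be unavailable during the rollout
					MaxSurge:       &maxSurge,       // at most 25% extra pods may exist above the desired count
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}
```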
Feb 15 11:15:43.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:15:45.053: INFO: namespace: e2e-tests-deployment-x4mlx, resource: bindings, ignored listing per whitelist Feb 15 11:15:45.212: INFO: namespace e2e-tests-deployment-x4mlx deletion completed in 9.441900019s • [SLOW TEST:33.320 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:15:45.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-gz786/secret-test-84b7eb11-4fe4-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 11:15:48.760: INFO: Waiting up to 5m0s for pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-gz786" to be "success or failure" Feb 15 11:15:48.819: INFO: Pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 58.986856ms Feb 15 11:15:51.014: INFO: Pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253434052s Feb 15 11:15:53.040: INFO: Pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279133839s Feb 15 11:15:55.344: INFO: Pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584091764s Feb 15 11:15:58.352: INFO: Pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.591194216s Feb 15 11:16:00.394: INFO: Pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.633841836s STEP: Saw pod success Feb 15 11:16:00.394: INFO: Pod "pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:16:00.400: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007 container env-test: STEP: delete the pod Feb 15 11:16:00.585: INFO: Waiting for pod pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007 to disappear Feb 15 11:16:00.618: INFO: Pod pod-configmaps-84ccadef-4fe4-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:16:00.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gz786" for this suite. 
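Editor's note: the "Secrets should be consumable via the environment" test above injects a Secret key into a container's environment. A minimal sketch of that pattern follows; the Secret name, key, image and command are illustrative assumptions, not the generated names the test uses.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose container reads one key of a Secret as an environment variable
	// and simply prints its environment, so success can be verified from the logs.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", p)
}
```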
Feb 15 11:16:06.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:16:06.914: INFO: namespace: e2e-tests-secrets-gz786, resource: bindings, ignored listing per whitelist Feb 15 11:16:06.987: INFO: namespace e2e-tests-secrets-gz786 deletion completed in 6.280726812s • [SLOW TEST:21.774 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:16:06.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:17:07.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-42dsp" for this suite. 
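Editor's note: the probe test above relies on a readiness probe that always fails, so the kubelet keeps the container running (readiness failures never trigger restarts) but never marks the pod Ready. Below is a sketch of that shape, assuming the v1.13-era k8s.io/api module where Probe still embeds Handler (newer releases renamed it ProbeHandler); the image and commands are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A long-running container with a readiness probe that always exits non-zero:
	// the pod stays Running but its Ready condition never becomes True.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-fail-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Printf("%+v\n", p)
}
```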
Feb 15 11:17:29.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:17:29.330: INFO: namespace: e2e-tests-container-probe-42dsp, resource: bindings, ignored listing per whitelist Feb 15 11:17:29.430: INFO: namespace e2e-tests-container-probe-42dsp deletion completed in 22.19911413s • [SLOW TEST:82.443 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:17:29.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 15 11:17:29.757: INFO: Waiting up to 5m0s for pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-g9z2l" to be "success or failure" Feb 15 11:17:29.766: INFO: Pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533386ms Feb 15 11:17:31.903: INFO: Pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145976995s Feb 15 11:17:33.927: INFO: Pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170101905s Feb 15 11:17:35.956: INFO: Pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199196853s Feb 15 11:17:37.995: INFO: Pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238162596s Feb 15 11:17:40.015: INFO: Pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258185689s STEP: Saw pod success Feb 15 11:17:40.016: INFO: Pod "downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:17:40.029: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 11:17:40.783: INFO: Waiting for pod downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007 to disappear Feb 15 11:17:41.047: INFO: Pod downward-api-c100d2e6-4fe4-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:17:41.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g9z2l" for this suite. 
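Editor's note: the Downward API test above checks that a pod can see its own name, namespace and IP as environment variables. A minimal sketch of that wiring follows; the pod name, env var names and image are illustrative, only the fieldRef paths match what the test asserts.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Helper that builds one downward API env var from a fieldRef path.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	// A pod that prints its environment so the injected values can be checked in the logs.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	fmt.Printf("%+v\n", p)
}
```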
Feb 15 11:17:47.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:17:47.246: INFO: namespace: e2e-tests-downward-api-g9z2l, resource: bindings, ignored listing per whitelist Feb 15 11:17:47.396: INFO: namespace e2e-tests-downward-api-g9z2l deletion completed in 6.331258653s • [SLOW TEST:17.965 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:17:47.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Feb 15 11:17:47.581: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-rdzt5" to be "success or failure" Feb 15 11:17:47.672: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 91.50397ms Feb 15 11:17:49.897: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315780002s Feb 15 11:17:51.920: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339203491s Feb 15 11:17:55.325: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.744507727s Feb 15 11:17:58.212: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.63122148s Feb 15 11:18:00.224: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.643065212s Feb 15 11:18:02.247: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.666156259s STEP: Saw pod success Feb 15 11:18:02.247: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 15 11:18:02.253: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 15 11:18:03.126: INFO: Waiting for pod pod-host-path-test to disappear Feb 15 11:18:03.234: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:18:03.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-rdzt5" for this suite. 
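Editor's note: "pod-host-path-test" above mounts a hostPath volume and checks the mode of the mount point. A minimal sketch of that pattern follows; the host path, DirectoryOrCreate type, image and stat command are illustrative assumptions rather than the exact fixture.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	// A pod mounting a hostPath volume; the container stats the mount point so the
	// file mode can be read back from its logs.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-path-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/host-path-example",
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", p)
}
```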
Feb 15 11:18:09.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:18:09.365: INFO: namespace: e2e-tests-hostpath-rdzt5, resource: bindings, ignored listing per whitelist Feb 15 11:18:09.511: INFO: namespace e2e-tests-hostpath-rdzt5 deletion completed in 6.266672626s • [SLOW TEST:22.114 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:18:09.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-5vv7q STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5vv7q to expose endpoints map[] Feb 15 11:18:09.737: INFO: Get endpoints failed (6.161338ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 15 11:18:10.752: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5vv7q exposes endpoints map[] (1.020548402s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-5vv7q STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5vv7q to expose endpoints map[pod1:[80]] Feb 15 11:18:15.532: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.744717165s elapsed, will retry) Feb 15 11:18:20.951: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5vv7q exposes endpoints map[pod1:[80]] (10.163172002s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-5vv7q STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5vv7q to expose endpoints map[pod1:[80] pod2:[80]] Feb 15 11:18:25.924: INFO: Unexpected endpoints: found map[d9745984-4fe4-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.964307069s elapsed, will retry) Feb 15 11:18:31.334: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5vv7q exposes endpoints map[pod1:[80] pod2:[80]] (10.374541747s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-5vv7q STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5vv7q to expose endpoints map[pod2:[80]] Feb 15 11:18:32.617: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5vv7q exposes endpoints map[pod2:[80]] (1.263845152s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-5vv7q STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5vv7q 
to expose endpoints map[] Feb 15 11:18:33.984: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5vv7q exposes endpoints map[] (1.360850413s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:18:34.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-5vv7q" for this suite. Feb 15 11:18:57.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:18:57.197: INFO: namespace: e2e-tests-services-5vv7q, resource: bindings, ignored listing per whitelist Feb 15 11:18:57.247: INFO: namespace e2e-tests-services-5vv7q deletion completed in 22.737405102s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:47.736 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:18:57.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-f55e7c4e-4fe4-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 11:18:57.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-hxwqj" to be "success or failure" Feb 15 11:18:57.660: INFO: Pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.160211ms Feb 15 11:18:59.785: INFO: Pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14039105s Feb 15 11:19:01.807: INFO: Pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162527939s Feb 15 11:19:04.108: INFO: Pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463173018s Feb 15 11:19:06.387: INFO: Pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.742044011s Feb 15 11:19:08.405: INFO: Pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.760251062s STEP: Saw pod success Feb 15 11:19:08.405: INFO: Pod "pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:19:08.409: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 15 11:19:09.031: INFO: Waiting for pod pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007 to disappear Feb 15 11:19:09.055: INFO: Pod pod-configmaps-f55f69ee-4fe4-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:19:09.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hxwqj" for this suite. Feb 15 11:19:15.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:19:15.463: INFO: namespace: e2e-tests-configmap-hxwqj, resource: bindings, ignored listing per whitelist Feb 15 11:19:15.670: INFO: namespace e2e-tests-configmap-hxwqj deletion completed in 6.598982527s • [SLOW TEST:18.423 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:19:15.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:19:15.987: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Feb 15 11:19:16.009: INFO: Number of nodes with available pods: 0 Feb 15 11:19:16.009: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:17.338: INFO: Number of nodes with available pods: 0 Feb 15 11:19:17.338: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:18.035: INFO: Number of nodes with available pods: 0 Feb 15 11:19:18.036: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:19.035: INFO: Number of nodes with available pods: 0 Feb 15 11:19:19.035: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:20.069: INFO: Number of nodes with available pods: 0 Feb 15 11:19:20.069: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:21.026: INFO: Number of nodes with available pods: 0 Feb 15 11:19:21.027: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:22.253: INFO: Number of nodes with available pods: 0 Feb 15 11:19:22.253: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:23.336: INFO: Number of nodes with available pods: 0 Feb 15 11:19:23.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:24.065: INFO: Number of nodes with available pods: 0 Feb 15 11:19:24.066: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:25.032: INFO: Number of nodes with available pods: 0 Feb 15 11:19:25.032: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:26.032: INFO: Number of nodes with available pods: 1 Feb 15 11:19:26.032: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 15 11:19:26.262: INFO: Wrong image for pod: daemon-set-l48r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 11:19:27.314: INFO: Wrong image for pod: daemon-set-l48r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 11:19:28.307: INFO: Wrong image for pod: daemon-set-l48r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 11:19:29.397: INFO: Wrong image for pod: daemon-set-l48r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 11:19:30.658: INFO: Wrong image for pod: daemon-set-l48r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 11:19:31.309: INFO: Wrong image for pod: daemon-set-l48r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 11:19:32.314: INFO: Wrong image for pod: daemon-set-l48r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 11:19:32.314: INFO: Pod daemon-set-l48r9 is not available Feb 15 11:19:33.376: INFO: Pod daemon-set-qpq85 is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Feb 15 11:19:33.420: INFO: Number of nodes with available pods: 0 Feb 15 11:19:33.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:35.019: INFO: Number of nodes with available pods: 0 Feb 15 11:19:35.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:35.593: INFO: Number of nodes with available pods: 0 Feb 15 11:19:35.593: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:36.441: INFO: Number of nodes with available pods: 0 Feb 15 11:19:36.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:37.458: INFO: Number of nodes with available pods: 0 Feb 15 11:19:37.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:39.106: INFO: Number of nodes with available pods: 0 Feb 15 11:19:39.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:39.607: INFO: Number of nodes with available pods: 0 Feb 15 11:19:39.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:40.463: INFO: Number of nodes with available pods: 0 Feb 15 11:19:40.464: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:41.446: INFO: Number of nodes with available pods: 0 Feb 15 11:19:41.446: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:19:42.467: INFO: Number of nodes with available pods: 1 Feb 15 11:19:42.467: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8crqp, will wait for the garbage collector to delete the pods Feb 15 11:19:42.636: INFO: Deleting DaemonSet.extensions daemon-set took: 40.734062ms Feb 15 11:19:43.637: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000859933s Feb 15 11:20:02.698: INFO: Number of nodes with available pods: 0 Feb 15 11:20:02.698: INFO: Number of running nodes: 0, number of available pods: 0 Feb 15 11:20:02.762: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8crqp/daemonsets","resourceVersion":"21746340"},"items":null} Feb 15 11:20:02.774: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8crqp/pods","resourceVersion":"21746340"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:20:02.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-8crqp" for this suite. 
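Editor's note: the DaemonSet test above creates a daemon set, then updates the pod template image and waits for the RollingUpdate strategy to replace the pod on each node. A minimal sketch of a DaemonSet with that update strategy follows; the name, labels, maxUnavailable value and image are illustrative.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(1)
	// A DaemonSet using RollingUpdate: changing the template image afterwards
	// replaces the daemon pod on every node, at most one node at a time here.
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-example"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"daemonset-name": "daemon-set-example"}},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"daemonset-name": "daemon-set-example"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // the update step would switch this to another image
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", ds)
}
```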
Feb 15 11:20:10.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:20:10.884: INFO: namespace: e2e-tests-daemonsets-8crqp, resource: bindings, ignored listing per whitelist Feb 15 11:20:11.040: INFO: namespace e2e-tests-daemonsets-8crqp deletion completed in 8.2308747s • [SLOW TEST:55.370 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:20:11.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-21445a2c-4fe5-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 11:20:11.265: INFO: Waiting up to 5m0s for pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-l4mxv" to be "success or failure" Feb 15 11:20:11.371: INFO: Pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 106.424996ms Feb 15 11:20:13.391: INFO: Pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12580524s Feb 15 11:20:15.406: INFO: Pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140893395s Feb 15 11:20:18.030: INFO: Pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.765615s Feb 15 11:20:20.050: INFO: Pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.784814011s Feb 15 11:20:22.063: INFO: Pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.79828649s STEP: Saw pod success Feb 15 11:20:22.063: INFO: Pod "pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:20:22.069: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 15 11:20:22.976: INFO: Waiting for pod pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007 to disappear Feb 15 11:20:22.989: INFO: Pod pod-secrets-2144f7d7-4fe5-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:20:22.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-l4mxv" for this suite. 
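Editor's note: the Secrets volume test above ("mappings and Item Mode set") mounts a Secret, remaps one key to a new file path and gives that item an explicit mode. A minimal sketch follows; the Secret name, key, path, mode (0400), image and command are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400)
	// A pod mounting a Secret volume with a single remapped item and a per-item mode;
	// the container lists and reads the file so the mapping and mode can be checked.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &itemMode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", p)
}
```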
Feb 15 11:20:29.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:20:29.216: INFO: namespace: e2e-tests-secrets-l4mxv, resource: bindings, ignored listing per whitelist Feb 15 11:20:29.303: INFO: namespace e2e-tests-secrets-l4mxv deletion completed in 6.305705542s • [SLOW TEST:18.262 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:20:29.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:20:29.451: INFO: Creating deployment "nginx-deployment" Feb 15 11:20:29.616: INFO: Waiting for observed generation 1 Feb 15 11:20:33.083: INFO: Waiting for all required pods to come up Feb 15 11:20:33.767: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 15 11:21:18.617: INFO: Waiting for deployment "nginx-deployment" to complete Feb 15 11:21:18.665: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 15 11:21:18.682: INFO: Updating deployment nginx-deployment Feb 15 11:21:18.682: INFO: Waiting for observed generation 2 Feb 15 11:21:22.263: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 15 11:21:22.277: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 15 11:21:23.023: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 15 11:21:23.370: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 15 11:21:23.371: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 15 11:21:23.460: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 15 11:21:23.535: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 15 11:21:23.535: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 15 11:21:26.675: INFO: Updating deployment nginx-deployment Feb 15 11:21:26.676: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 15 11:21:28.823: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 15 11:21:37.364: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 15 11:21:38.085: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-db85t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-db85t/deployments/nginx-deployment,UID:2c1ed3c4-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746654,Generation:3,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-15 11:21:20 +0000 UTC 2020-02-15 11:20:29 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-15 11:21:28 +0000 UTC 2020-02-15 11:21:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 15 11:21:38.788: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-db85t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-db85t/replicasets/nginx-deployment-5c98f8fb5,UID:4979d5d4-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746711,Generation:3,CreationTimestamp:2020-02-15 11:21:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2c1ed3c4-4fe5-11ea-a994-fa163e34d433 0xc001c67ad7 0xc001c67ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 11:21:38.788: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 15 11:21:38.788: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-db85t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-db85t/replicasets/nginx-deployment-85ddf47c5d,UID:2c3af5e2-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746707,Generation:3,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2c1ed3c4-4fe5-11ea-a994-fa163e34d433 0xc001c67b97 0xc001c67b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 15 11:21:40.268: INFO: Pod "nginx-deployment-5c98f8fb5-bfmn6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bfmn6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-bfmn6,UID:49e8b7d3-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746636,Generation:0,CreationTimestamp:2020-02-15 11:21:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00224b8b0 0xc00224b8b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00224b920} {node.kubernetes.io/unreachable Exists NoExecute 0xc00224b940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.269: INFO: Pod "nginx-deployment-5c98f8fb5-c6vcw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c6vcw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-c6vcw,UID:49a88165-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746627,Generation:0,CreationTimestamp:2020-02-15 11:21:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00224ba07 0xc00224ba08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00224ba70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00224ba90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.269: INFO: Pod "nginx-deployment-5c98f8fb5-g6hn9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g6hn9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-g6hn9,UID:4f813b68-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746716,Generation:0,CreationTimestamp:2020-02-15 11:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00224bb57 0xc00224bb58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00224bbc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00224bbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.269: INFO: Pod "nginx-deployment-5c98f8fb5-jwpfr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jwpfr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-jwpfr,UID:5269d454-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746703,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00224bca7 0xc00224bca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00224bd10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00224bd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.269: INFO: Pod "nginx-deployment-5c98f8fb5-mhtqz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mhtqz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-mhtqz,UID:50647d59-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746676,Generation:0,CreationTimestamp:2020-02-15 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00224bda7 0xc00224bda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00224be10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00224be30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:33 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.269: INFO: Pod "nginx-deployment-5c98f8fb5-n56ts" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n56ts,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-n56ts,UID:526a3c9a-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746695,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00224bea7 0xc00224bea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00224bf10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00224bf30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.269: INFO: Pod "nginx-deployment-5c98f8fb5-n5scg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n5scg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-n5scg,UID:526a69b0-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746696,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00224bfa7 0xc00224bfa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil 
nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e010} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.270: INFO: Pod "nginx-deployment-5c98f8fb5-sj2g9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sj2g9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-sj2g9,UID:526a64a1-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746701,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00226e0a7 0xc00226e0a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e110} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.270: INFO: Pod "nginx-deployment-5c98f8fb5-t4gnv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-t4gnv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-t4gnv,UID:49a82dc2-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746631,Generation:0,CreationTimestamp:2020-02-15 11:21:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00226e1a7 0xc00226e1a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e210} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.270: INFO: Pod "nginx-deployment-5c98f8fb5-v92pl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v92pl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-v92pl,UID:49a43d5a-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746616,Generation:0,CreationTimestamp:2020-02-15 11:21:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00226e2f7 0xc00226e2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e360} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:19 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.270: INFO: Pod "nginx-deployment-5c98f8fb5-wtm66" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wtm66,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-wtm66,UID:52b4e68a-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746706,Generation:0,CreationTimestamp:2020-02-15 11:21:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00226e447 0xc00226e448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e4b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.270: INFO: Pod "nginx-deployment-5c98f8fb5-x9srv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x9srv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-x9srv,UID:49f0a34d-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746634,Generation:0,CreationTimestamp:2020-02-15 11:21:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00226e547 
0xc00226e548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.271: INFO: Pod "nginx-deployment-5c98f8fb5-xdxlk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xdxlk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-5c98f8fb5-xdxlk,UID:5060cbb6-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746677,Generation:0,CreationTimestamp:2020-02-15 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4979d5d4-4fe5-11ea-a994-fa163e34d433 0xc00226e6a7 0xc00226e6a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e710} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.271: INFO: Pod "nginx-deployment-85ddf47c5d-46p89" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-46p89,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-46p89,UID:2c60c4a0-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746555,Generation:0,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226e7a7 0xc00226e7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e810} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00226e830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-15 11:20:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fceed074b2c35ad0814014cb13dd4546500adcbc8d1390f7e48438318f1fc757}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.271: INFO: Pod "nginx-deployment-85ddf47c5d-488nj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-488nj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-488nj,UID:526a4ddb-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746700,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226e8f7 0xc00226e8f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226e970} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226e990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.271: INFO: Pod "nginx-deployment-85ddf47c5d-4ns8q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4ns8q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-4ns8q,UID:2c5381f0-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746524,Generation:0,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226ea07 0xc00226ea08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226ea70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226ea90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-15 11:20:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1ee8445d0dd70cf4373d6d7eb8abc3d3613135ed2d3d8523e887bf7bba64ebcb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.271: INFO: Pod "nginx-deployment-85ddf47c5d-7qs9k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7qs9k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-7qs9k,UID:4e8f65fa-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746662,Generation:0,CreationTimestamp:2020-02-15 11:21:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226ed07 0xc00226ed08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226ed70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226ed90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.272: INFO: Pod "nginx-deployment-85ddf47c5d-8qtbb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8qtbb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-8qtbb,UID:526a065a-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746692,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226ee47 0xc00226ee48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226eeb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226eed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.272: INFO: Pod "nginx-deployment-85ddf47c5d-9b9qs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9b9qs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-9b9qs,UID:2c55c8b4-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746552,Generation:0,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226ef47 0xc00226ef48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} 
false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226efb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226efd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-15 11:20:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://35e62960204d97dab38dbf30ea232435b3e840cc10649ae6390667ccef684c7f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.272: INFO: Pod "nginx-deployment-85ddf47c5d-c7ckl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c7ckl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-c7ckl,UID:4f833864-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746724,Generation:0,CreationTimestamp:2020-02-15 11:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f097 0xc00226f098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f100} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226f120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.272: INFO: Pod "nginx-deployment-85ddf47c5d-cpv79" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cpv79,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-cpv79,UID:526ab6f3-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746694,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f1d7 0xc00226f1d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f240} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226f260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.272: INFO: Pod "nginx-deployment-85ddf47c5d-flsvf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-flsvf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-flsvf,UID:2c60bade-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746565,Generation:0,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f2d7 0xc00226f2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f340} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226f360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-15 11:20:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7ca89a904d65fe31f34f8a41edb932aa3e2563ae3b22a5eff97fca88ccff65a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.272: INFO: Pod "nginx-deployment-85ddf47c5d-fs9sk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fs9sk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-fs9sk,UID:505ca413-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746675,Generation:0,CreationTimestamp:2020-02-15 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f427 0xc00226f428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f490} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226f4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.272: INFO: Pod "nginx-deployment-85ddf47c5d-hw7qj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hw7qj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-hw7qj,UID:5069bc4e-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746678,Generation:0,CreationTimestamp:2020-02-15 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f527 0xc00226f528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f590} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226f5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.273: INFO: Pod "nginx-deployment-85ddf47c5d-k5m94" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k5m94,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-k5m94,UID:2c605ac3-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746570,Generation:0,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f627 0xc00226f628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f690} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226f6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-15 11:20:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9f94aa5d97256453657c13e8e7d8cbce3d354dccd1acc2ce5ed6c921d93c792c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.273: INFO: Pod "nginx-deployment-85ddf47c5d-l5k8b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l5k8b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-l5k8b,UID:505c2476-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746670,Generation:0,CreationTimestamp:2020-02-15 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f777 0xc00226f778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f7e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226f800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.273: INFO: Pod "nginx-deployment-85ddf47c5d-mpfrb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mpfrb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-mpfrb,UID:2c60a9a0-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746542,Generation:0,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f877 0xc00226f878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226f8e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00226f900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-15 11:20:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c5dfca8c824b38c5f9389f7a11c6a59c79267e39bdd3f01ffd807a5513849b12}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.273: INFO: Pod "nginx-deployment-85ddf47c5d-qg2kj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qg2kj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-qg2kj,UID:526a36e7-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746697,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226f9c7 0xc00226f9c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226fa30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226fa50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.273: INFO: Pod "nginx-deployment-85ddf47c5d-t2nc2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t2nc2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-t2nc2,UID:505c8883-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746674,Generation:0,CreationTimestamp:2020-02-15 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226fac7 0xc00226fac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226fb30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226fb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.273: INFO: Pod "nginx-deployment-85ddf47c5d-wgv29" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wgv29,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-wgv29,UID:2c57d9ee-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746558,Generation:0,CreationTimestamp:2020-02-15 11:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226fbc7 
0xc00226fbc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226fc30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226fc50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-15 11:20:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://70d15a6c0306e05433d661abf6a859a5ca73a1cbe5df6bfdea4f0b6ed8e5022f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.273: INFO: Pod "nginx-deployment-85ddf47c5d-wt8tv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wt8tv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-wt8tv,UID:526ac724-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746699,Generation:0,CreationTimestamp:2020-02-15 11:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226fd17 0xc00226fd18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226fd80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226fda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.274: INFO: Pod "nginx-deployment-85ddf47c5d-xhmzd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xhmzd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-xhmzd,UID:4f831807-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746691,Generation:0,CreationTimestamp:2020-02-15 11:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226fe17 0xc00226fe18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc00226fe80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226fea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-15 11:21:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 11:21:40.274: INFO: Pod "nginx-deployment-85ddf47c5d-zsbtd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zsbtd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-db85t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-db85t/pods/nginx-deployment-85ddf47c5d-zsbtd,UID:2c76c1dc-4fe5-11ea-a994-fa163e34d433,ResourceVersion:21746574,Generation:0,CreationTimestamp:2020-02-15 11:20:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2c3af5e2-4fe5-11ea-a994-fa163e34d433 0xc00226ff57 0xc00226ff58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lp577 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lp577,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lp577 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226ffc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226ffe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:21:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:20:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-15 11:20:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 11:21:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://41b6a45235af0f1284a57183c6ce6304cd36737d99c4dd5a43b469a0ba8f7e23}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:21:40.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-db85t" for this suite. Feb 15 11:22:34.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:22:37.252: INFO: namespace: e2e-tests-deployment-db85t, resource: bindings, ignored listing per whitelist Feb 15 11:22:37.295: INFO: namespace e2e-tests-deployment-db85t deletion completed in 55.789114467s • [SLOW TEST:127.992 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:22:37.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 15 11:22:38.136: INFO: Waiting up to 5m0s for pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-dp2z7" to be "success or failure" Feb 15 11:22:38.610: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 473.872115ms Feb 15 11:22:40.636: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499633182s Feb 15 11:22:42.650: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513972038s Feb 15 11:22:44.680: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543936992s Feb 15 11:22:46.704: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568226212s Feb 15 11:22:48.728: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.592375839s Feb 15 11:22:50.754: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.618503609s Feb 15 11:22:53.220: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.084132992s Feb 15 11:22:55.236: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.100189977s Feb 15 11:22:57.250: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.113886528s STEP: Saw pod success Feb 15 11:22:57.250: INFO: Pod "downward-api-78d0005d-4fe5-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:22:57.301: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-78d0005d-4fe5-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 11:22:57.492: INFO: Waiting for pod downward-api-78d0005d-4fe5-11ea-960a-0242ac110007 to disappear Feb 15 11:22:57.530: INFO: Pod downward-api-78d0005d-4fe5-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:22:57.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dp2z7" for this suite. Feb 15 11:23:03.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:23:03.991: INFO: namespace: e2e-tests-downward-api-dp2z7, resource: bindings, ignored listing per whitelist Feb 15 11:23:04.008: INFO: namespace e2e-tests-downward-api-dp2z7 deletion completed in 6.444455578s • [SLOW TEST:26.712 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:23:04.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:23:04.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-k958v" to be "success or failure" Feb 15 11:23:04.350: INFO: Pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007": Phase="Pending", 
Reason="", readiness=false. Elapsed: 44.564531ms Feb 15 11:23:06.379: INFO: Pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074003387s Feb 15 11:23:08.391: INFO: Pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08593857s Feb 15 11:23:10.401: INFO: Pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095437638s Feb 15 11:23:12.413: INFO: Pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107581254s Feb 15 11:23:14.427: INFO: Pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122157744s STEP: Saw pod success Feb 15 11:23:14.427: INFO: Pod "downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:23:14.431: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:23:14.515: INFO: Waiting for pod downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007 to disappear Feb 15 11:23:14.546: INFO: Pod downwardapi-volume-88684e7e-4fe5-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:23:14.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k958v" for this suite. Feb 15 11:23:20.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:23:20.736: INFO: namespace: e2e-tests-downward-api-k958v, resource: bindings, ignored listing per whitelist Feb 15 11:23:20.862: INFO: namespace e2e-tests-downward-api-k958v deletion completed in 6.303093639s • [SLOW TEST:16.853 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:23:20.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 15 11:23:21.040: INFO: Waiting up to 5m0s for pod "pod-9262d162-4fe5-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-9mdsl" to be "success or failure" Feb 15 11:23:21.187: INFO: Pod "pod-9262d162-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", 
readiness=false. Elapsed: 146.958052ms Feb 15 11:23:23.202: INFO: Pod "pod-9262d162-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162424408s Feb 15 11:23:25.701: INFO: Pod "pod-9262d162-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.66126358s Feb 15 11:23:27.729: INFO: Pod "pod-9262d162-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.689367178s Feb 15 11:23:29.752: INFO: Pod "pod-9262d162-4fe5-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.711925945s STEP: Saw pod success Feb 15 11:23:29.752: INFO: Pod "pod-9262d162-4fe5-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:23:29.759: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9262d162-4fe5-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:23:30.004: INFO: Waiting for pod pod-9262d162-4fe5-11ea-960a-0242ac110007 to disappear Feb 15 11:23:30.074: INFO: Pod pod-9262d162-4fe5-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:23:30.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9mdsl" for this suite. Feb 15 11:23:36.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:23:36.430: INFO: namespace: e2e-tests-emptydir-9mdsl, resource: bindings, ignored listing per whitelist Feb 15 11:23:36.450: INFO: namespace e2e-tests-emptydir-9mdsl deletion completed in 6.277905707s • [SLOW TEST:15.587 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:23:36.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:23:36.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-c7xjj" to be "success or failure" Feb 15 11:23:36.688: INFO: Pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221132ms Feb 15 11:23:38.706: INFO: Pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026012532s Feb 15 11:23:40.744: INFO: Pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064199365s Feb 15 11:23:42.915: INFO: Pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234758425s Feb 15 11:23:44.941: INFO: Pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260684605s Feb 15 11:23:46.958: INFO: Pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.277735423s STEP: Saw pod success Feb 15 11:23:46.958: INFO: Pod "downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:23:46.961: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:23:47.429: INFO: Waiting for pod downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007 to disappear Feb 15 11:23:47.462: INFO: Pod downwardapi-volume-9bb4fe22-4fe5-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:23:47.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c7xjj" for this suite. Feb 15 11:23:53.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:23:53.793: INFO: namespace: e2e-tests-projected-c7xjj, resource: bindings, ignored listing per whitelist Feb 15 11:23:53.957: INFO: namespace e2e-tests-projected-c7xjj deletion completed in 6.470280695s • [SLOW TEST:17.506 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:23:53.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:23:54.279: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 15 11:23:54.363: INFO: Number of nodes with available pods: 0 Feb 15 11:23:54.363: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
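The complex-daemon case above creates a DaemonSet named daemon-set that is scheduled only onto nodes carrying a particular label, then relabels the node to make the daemon pod appear and disappear. The suite's actual manifest is not part of this log, so the following is only a rough hand-written equivalent: the label key node-color, the container image, and the OnDelete strategy (implied by the later switch to RollingUpdate) are assumptions, not values taken from the log.

    # Sketch only -- label key, image and updateStrategy are assumptions.
    kubectl label node hunter-server-hu5at5svl7ps node-color=blue
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: OnDelete            # assumed initial strategy; the test later switches to RollingUpdate
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          nodeSelector:
            node-color: blue      # pods are created only on nodes carrying this label
          containers:
          - name: app
            image: docker.io/library/nginx:1.14-alpine
    EOF
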
Feb 15 11:23:54.420: INFO: Number of nodes with available pods: 0 Feb 15 11:23:54.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:23:55.441: INFO: Number of nodes with available pods: 0 Feb 15 11:23:55.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:23:56.434: INFO: Number of nodes with available pods: 0 Feb 15 11:23:56.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:23:57.443: INFO: Number of nodes with available pods: 0 Feb 15 11:23:57.443: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:23:58.445: INFO: Number of nodes with available pods: 0 Feb 15 11:23:58.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:23:59.437: INFO: Number of nodes with available pods: 0 Feb 15 11:23:59.437: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:00.429: INFO: Number of nodes with available pods: 0 Feb 15 11:24:00.429: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:02.540: INFO: Number of nodes with available pods: 0 Feb 15 11:24:02.540: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:03.453: INFO: Number of nodes with available pods: 0 Feb 15 11:24:03.453: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:04.438: INFO: Number of nodes with available pods: 0 Feb 15 11:24:04.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:05.442: INFO: Number of nodes with available pods: 1 Feb 15 11:24:05.442: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 15 11:24:05.633: INFO: Number of nodes with available pods: 1 Feb 15 11:24:05.634: INFO: Number of running nodes: 0, number of available pods: 1 Feb 15 11:24:06.649: INFO: Number of nodes with available pods: 0 Feb 15 11:24:06.649: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 15 11:24:06.800: INFO: Number of nodes with available pods: 0 Feb 15 11:24:06.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:08.213: INFO: Number of nodes with available pods: 0 Feb 15 11:24:08.213: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:08.815: INFO: Number of nodes with available pods: 0 Feb 15 11:24:08.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:09.813: INFO: Number of nodes with available pods: 0 Feb 15 11:24:09.813: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:10.870: INFO: Number of nodes with available pods: 0 Feb 15 11:24:10.871: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:11.826: INFO: Number of nodes with available pods: 0 Feb 15 11:24:11.826: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:12.862: INFO: Number of nodes with available pods: 0 Feb 15 11:24:12.862: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:13.813: INFO: Number of nodes with available pods: 0 Feb 15 11:24:13.813: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:15.075: INFO: Number of 
nodes with available pods: 0 Feb 15 11:24:15.076: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:15.818: INFO: Number of nodes with available pods: 0 Feb 15 11:24:15.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:16.820: INFO: Number of nodes with available pods: 0 Feb 15 11:24:16.820: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:18.146: INFO: Number of nodes with available pods: 0 Feb 15 11:24:18.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:18.897: INFO: Number of nodes with available pods: 0 Feb 15 11:24:18.897: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:19.834: INFO: Number of nodes with available pods: 0 Feb 15 11:24:19.834: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:20.823: INFO: Number of nodes with available pods: 0 Feb 15 11:24:20.824: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 15 11:24:21.825: INFO: Number of nodes with available pods: 1 Feb 15 11:24:21.825: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wfjn5, will wait for the garbage collector to delete the pods Feb 15 11:24:21.922: INFO: Deleting DaemonSet.extensions daemon-set took: 20.061796ms Feb 15 11:24:22.123: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.865866ms Feb 15 11:24:32.881: INFO: Number of nodes with available pods: 0 Feb 15 11:24:32.881: INFO: Number of running nodes: 0, number of available pods: 0 Feb 15 11:24:32.890: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wfjn5/daemonsets","resourceVersion":"21747244"},"items":null} Feb 15 11:24:32.894: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wfjn5/pods","resourceVersion":"21747244"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:24:32.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-wfjn5" for this suite. 
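The "green" phase of the same test can be expressed with plain kubectl: relabel the node so the existing daemon pod is unscheduled, then point the DaemonSet's node selector at the new label and switch its update strategy to RollingUpdate. The label key is the assumption carried over from the sketch above, and kubectl patch is merely one way to phrase the change the log describes.

    # Sketch only; continues the assumptions from the previous example.
    kubectl label node hunter-server-hu5at5svl7ps node-color=green --overwrite
    kubectl patch daemonset daemon-set --type merge \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"},"template":{"spec":{"nodeSelector":{"node-color":"green"}}}}}'
    # Tear-down comparable to the AfterEach step; the suite then waits for the
    # garbage collector to remove the daemon pods.
    kubectl delete daemonset daemon-set
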
Feb 15 11:24:39.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:24:39.194: INFO: namespace: e2e-tests-daemonsets-wfjn5, resource: bindings, ignored listing per whitelist Feb 15 11:24:39.224: INFO: namespace e2e-tests-daemonsets-wfjn5 deletion completed in 6.218486933s • [SLOW TEST:45.266 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:24:39.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 15 11:24:39.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j8n5j' Feb 15 11:24:41.432: INFO: stderr: "" Feb 15 11:24:41.433: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 15 11:24:51.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j8n5j -o json' Feb 15 11:24:51.709: INFO: stderr: "" Feb 15 11:24:51.709: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-15T11:24:41Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-j8n5j\",\n \"resourceVersion\": \"21747296\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-j8n5j/pods/e2e-test-nginx-pod\",\n \"uid\": \"c249fdf6-4fe5-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-76xql\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n 
\"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-76xql\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-76xql\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T11:24:41Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T11:24:49Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T11:24:49Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T11:24:41Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://826a749f9e34c717b099987acb2a8ee334528d5aa29ece24a096ae1585a3717c\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-15T11:24:48Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-15T11:24:41Z\"\n }\n}\n" STEP: replace the image in the pod Feb 15 11:24:51.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-j8n5j' Feb 15 11:24:52.400: INFO: stderr: "" Feb 15 11:24:52.400: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 15 11:24:52.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j8n5j' Feb 15 11:24:58.929: INFO: stderr: "" Feb 15 11:24:58.929: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:24:58.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j8n5j" for this suite. 
Feb 15 11:25:05.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:25:05.339: INFO: namespace: e2e-tests-kubectl-j8n5j, resource: bindings, ignored listing per whitelist Feb 15 11:25:05.378: INFO: namespace e2e-tests-kubectl-j8n5j deletion completed in 6.404755305s • [SLOW TEST:26.154 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:25:05.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 15 11:25:05.689: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:25:23.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-fhrmv" for this suite. 
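The InitContainer spec above builds a pod with restartPolicy Never whose init container exits non-zero, and asserts that the app container never starts and the pod ends up Failed. A minimal sketch under those assumptions (image, names and commands are placeholders, not what the e2e framework uses):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]    # init container fails
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "echo should never run"]
EOF
# with restartPolicy Never the failed init container is not retried, the app
# container never starts, and the pod phase settles on Failed
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'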
Feb 15 11:25:30.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:25:30.229: INFO: namespace: e2e-tests-init-container-fhrmv, resource: bindings, ignored listing per whitelist Feb 15 11:25:30.288: INFO: namespace e2e-tests-init-container-fhrmv deletion completed in 6.397529767s • [SLOW TEST:24.910 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:25:30.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-mdbfv/configmap-test-df92112d-4fe5-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 11:25:30.577: INFO: Waiting up to 5m0s for pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-mdbfv" to be "success or failure" Feb 15 11:25:30.623: INFO: Pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 45.744162ms Feb 15 11:25:32.637: INFO: Pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059164683s Feb 15 11:25:34.655: INFO: Pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076947136s Feb 15 11:25:36.678: INFO: Pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100075923s Feb 15 11:25:38.698: INFO: Pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120015651s Feb 15 11:25:40.713: INFO: Pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.135060661s STEP: Saw pod success Feb 15 11:25:40.713: INFO: Pod "pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:25:40.718: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007 container env-test: STEP: delete the pod Feb 15 11:25:40.841: INFO: Waiting for pod pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007 to disappear Feb 15 11:25:40.857: INFO: Pod pod-configmaps-df9402ae-4fe5-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:25:40.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mdbfv" for this suite. Feb 15 11:25:46.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:25:46.993: INFO: namespace: e2e-tests-configmap-mdbfv, resource: bindings, ignored listing per whitelist Feb 15 11:25:47.028: INFO: namespace e2e-tests-configmap-mdbfv deletion completed in 6.157148382s • [SLOW TEST:16.739 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:25:47.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 15 11:25:47.189: INFO: Waiting up to 5m0s for pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007" in namespace "e2e-tests-containers-7xjck" to be "success or failure" Feb 15 11:25:47.198: INFO: Pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.23763ms Feb 15 11:25:49.219: INFO: Pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029416684s Feb 15 11:25:51.235: INFO: Pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045517222s Feb 15 11:25:53.566: INFO: Pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377291205s Feb 15 11:25:55.784: INFO: Pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.594983732s Feb 15 11:25:57.796: INFO: Pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.607367158s STEP: Saw pod success Feb 15 11:25:57.797: INFO: Pod "client-containers-e97ef938-4fe5-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:25:57.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e97ef938-4fe5-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:25:57.882: INFO: Waiting for pod client-containers-e97ef938-4fe5-11ea-960a-0242ac110007 to disappear Feb 15 11:25:57.897: INFO: Pod client-containers-e97ef938-4fe5-11ea-960a-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:25:57.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-7xjck" for this suite. Feb 15 11:26:04.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:26:04.164: INFO: namespace: e2e-tests-containers-7xjck, resource: bindings, ignored listing per whitelist Feb 15 11:26:04.241: INFO: namespace e2e-tests-containers-7xjck deletion completed in 6.256531522s • [SLOW TEST:17.213 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:26:04.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 15 11:26:04.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lmnc2' Feb 15 11:26:04.911: INFO: stderr: "" Feb 15 11:26:04.911: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. 
Feb 15 11:26:05.938: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:05.938: INFO: Found 0 / 1 Feb 15 11:26:06.929: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:06.929: INFO: Found 0 / 1 Feb 15 11:26:07.944: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:07.945: INFO: Found 0 / 1 Feb 15 11:26:08.960: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:08.960: INFO: Found 0 / 1 Feb 15 11:26:10.335: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:10.336: INFO: Found 0 / 1 Feb 15 11:26:11.106: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:11.106: INFO: Found 0 / 1 Feb 15 11:26:12.055: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:12.055: INFO: Found 0 / 1 Feb 15 11:26:12.950: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:12.950: INFO: Found 0 / 1 Feb 15 11:26:13.940: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:13.940: INFO: Found 0 / 1 Feb 15 11:26:14.936: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:14.937: INFO: Found 1 / 1 Feb 15 11:26:14.937: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 15 11:26:14.951: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:26:14.951: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 15 11:26:14.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6fdvg redis-master --namespace=e2e-tests-kubectl-lmnc2' Feb 15 11:26:15.145: INFO: stderr: "" Feb 15 11:26:15.145: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Feb 11:26:12.728 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Feb 11:26:12.728 # Server started, Redis version 3.2.12\n1:M 15 Feb 11:26:12.728 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 15 Feb 11:26:12.728 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 15 11:26:15.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6fdvg redis-master --namespace=e2e-tests-kubectl-lmnc2 --tail=1' Feb 15 11:26:15.306: INFO: stderr: "" Feb 15 11:26:15.306: INFO: stdout: "1:M 15 Feb 11:26:12.728 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 15 11:26:15.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6fdvg redis-master --namespace=e2e-tests-kubectl-lmnc2 --limit-bytes=1' Feb 15 11:26:15.471: INFO: stderr: "" Feb 15 11:26:15.471: INFO: stdout: " " STEP: exposing timestamps Feb 15 11:26:15.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6fdvg redis-master --namespace=e2e-tests-kubectl-lmnc2 --tail=1 --timestamps' Feb 15 11:26:15.600: INFO: stderr: "" Feb 15 11:26:15.601: INFO: stdout: "2020-02-15T11:26:12.730501707Z 1:M 15 Feb 11:26:12.728 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 15 11:26:18.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6fdvg redis-master --namespace=e2e-tests-kubectl-lmnc2 --since=1s' Feb 15 11:26:18.294: INFO: stderr: "" Feb 15 11:26:18.295: INFO: stdout: "" Feb 15 11:26:18.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6fdvg redis-master --namespace=e2e-tests-kubectl-lmnc2 --since=24h' Feb 15 11:26:18.629: INFO: stderr: "" Feb 15 11:26:18.629: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Feb 11:26:12.728 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Feb 11:26:12.728 # Server started, Redis version 3.2.12\n1:M 15 Feb 11:26:12.728 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Feb 11:26:12.728 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 15 11:26:18.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lmnc2' Feb 15 11:26:18.788: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 15 11:26:18.788: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 15 11:26:18.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-lmnc2' Feb 15 11:26:18.957: INFO: stderr: "No resources found.\n" Feb 15 11:26:18.958: INFO: stdout: "" Feb 15 11:26:18.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-lmnc2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 11:26:19.134: INFO: stderr: "" Feb 15 11:26:19.135: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:26:19.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lmnc2" for this suite. Feb 15 11:26:43.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:26:43.296: INFO: namespace: e2e-tests-kubectl-lmnc2, resource: bindings, ignored listing per whitelist Feb 15 11:26:43.475: INFO: namespace e2e-tests-kubectl-lmnc2 deletion completed in 24.33008588s • [SLOW TEST:39.232 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:26:43.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 15 11:26:43.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 15 11:26:43.898: INFO: stderr: "" Feb 15 11:26:43.898: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:26:43.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-kubectl-4xdvx" for this suite. Feb 15 11:26:50.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:26:50.256: INFO: namespace: e2e-tests-kubectl-4xdvx, resource: bindings, ignored listing per whitelist Feb 15 11:26:50.310: INFO: namespace e2e-tests-kubectl-4xdvx deletion completed in 6.386032582s • [SLOW TEST:6.834 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:26:50.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:26:50.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-6hlqc" to be "success or failure" Feb 15 11:26:50.600: INFO: Pod "downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 23.669375ms Feb 15 11:26:52.875: INFO: Pod "downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298752137s Feb 15 11:26:54.886: INFO: Pod "downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309350781s Feb 15 11:26:57.199: INFO: Pod "downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.622434639s Feb 15 11:26:59.217: INFO: Pod "downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.640719845s STEP: Saw pod success Feb 15 11:26:59.217: INFO: Pod "downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:26:59.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:26:59.356: INFO: Waiting for pod downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007 to disappear Feb 15 11:26:59.378: INFO: Pod downwardapi-volume-0f32a4f1-4fe6-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:26:59.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6hlqc" for this suite. Feb 15 11:27:05.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:27:05.574: INFO: namespace: e2e-tests-downward-api-6hlqc, resource: bindings, ignored listing per whitelist Feb 15 11:27:05.597: INFO: namespace e2e-tests-downward-api-6hlqc deletion completed in 6.209844261s • [SLOW TEST:15.286 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:27:05.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 15 11:27:14.701: INFO: 3 pods remaining Feb 15 11:27:14.702: INFO: 0 pods has nil DeletionTimestamp Feb 15 11:27:14.702: INFO: STEP: Gathering metrics W0215 11:27:15.335053 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 15 11:27:15.335: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:27:15.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-dbqrn" for this suite. Feb 15 11:27:27.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:27:27.598: INFO: namespace: e2e-tests-gc-dbqrn, resource: bindings, ignored listing per whitelist Feb 15 11:27:27.632: INFO: namespace e2e-tests-gc-dbqrn deletion completed in 12.291207947s • [SLOW TEST:22.035 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:27:27.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:27:27.801: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:27:38.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jc8r9" for this suite. 
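The Pods spec above drives remote command execution over websockets against the pod's exec subresource directly through the API server. A much simpler way to exercise the same subresource is plain kubectl exec, which streams stdin/stdout over the same endpoint; the pod name and command below are placeholders, and this is only a stand-in for the raw websocket client the test uses:

kubectl run exec-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/exec-demo --timeout=120s
kubectl exec exec-demo -- cat /etc/resolv.conf   # streamed via the pods/exec subresource
kubectl delete pod exec-demo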
Feb 15 11:28:26.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:28:26.676: INFO: namespace: e2e-tests-pods-jc8r9, resource: bindings, ignored listing per whitelist Feb 15 11:28:26.705: INFO: namespace e2e-tests-pods-jc8r9 deletion completed in 48.317385041s • [SLOW TEST:59.073 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:28:26.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 15 11:28:27.178: INFO: Waiting up to 5m0s for pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-wktb8" to be "success or failure" Feb 15 11:28:27.185: INFO: Pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.006609ms Feb 15 11:28:29.199: INFO: Pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021510203s Feb 15 11:28:31.215: INFO: Pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037165263s Feb 15 11:28:33.629: INFO: Pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450762286s Feb 15 11:28:35.685: INFO: Pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507498085s Feb 15 11:28:37.703: INFO: Pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.52496952s STEP: Saw pod success Feb 15 11:28:37.703: INFO: Pod "downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:28:37.710: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 11:28:37.807: INFO: Waiting for pod downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007 to disappear Feb 15 11:28:37.865: INFO: Pod downward-api-48bf4fab-4fe6-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:28:37.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wktb8" for this suite. 
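The Downward API spec above injects the node's IP into the container environment and checks the container output. A minimal sketch of that wiring (pod and container names are placeholders; status.hostIP is the standard downward API field for the host IP, and the node in this run reports 10.96.1.240 elsewhere in the log):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # the field the test asserts on
EOF
kubectl logs downward-hostip-demo     # prints HOST_IP=<node IP> once the pod has run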
Feb 15 11:28:43.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:28:44.016: INFO: namespace: e2e-tests-downward-api-wktb8, resource: bindings, ignored listing per whitelist Feb 15 11:28:44.152: INFO: namespace e2e-tests-downward-api-wktb8 deletion completed in 6.268365597s • [SLOW TEST:17.447 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:28:44.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 15 11:28:44.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:44.886: INFO: stderr: "" Feb 15 11:28:44.886: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 15 11:28:44.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:45.181: INFO: stderr: "" Feb 15 11:28:45.182: INFO: stdout: "update-demo-nautilus-9nxlf update-demo-nautilus-pz4zb " Feb 15 11:28:45.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nxlf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:45.320: INFO: stderr: "" Feb 15 11:28:45.320: INFO: stdout: "" Feb 15 11:28:45.321: INFO: update-demo-nautilus-9nxlf is created but not running Feb 15 11:28:50.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:51.332: INFO: stderr: "" Feb 15 11:28:51.332: INFO: stdout: "update-demo-nautilus-9nxlf update-demo-nautilus-pz4zb " Feb 15 11:28:51.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nxlf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:51.741: INFO: stderr: "" Feb 15 11:28:51.741: INFO: stdout: "" Feb 15 11:28:51.741: INFO: update-demo-nautilus-9nxlf is created but not running Feb 15 11:28:56.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:56.906: INFO: stderr: "" Feb 15 11:28:56.906: INFO: stdout: "update-demo-nautilus-9nxlf update-demo-nautilus-pz4zb " Feb 15 11:28:56.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nxlf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:57.036: INFO: stderr: "" Feb 15 11:28:57.036: INFO: stdout: "true" Feb 15 11:28:57.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nxlf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:57.162: INFO: stderr: "" Feb 15 11:28:57.162: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:28:57.162: INFO: validating pod update-demo-nautilus-9nxlf Feb 15 11:28:57.188: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:28:57.188: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:28:57.188: INFO: update-demo-nautilus-9nxlf is verified up and running Feb 15 11:28:57.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pz4zb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:57.318: INFO: stderr: "" Feb 15 11:28:57.319: INFO: stdout: "true" Feb 15 11:28:57.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pz4zb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:57.464: INFO: stderr: "" Feb 15 11:28:57.464: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:28:57.465: INFO: validating pod update-demo-nautilus-pz4zb Feb 15 11:28:57.481: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:28:57.481: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:28:57.482: INFO: update-demo-nautilus-pz4zb is verified up and running STEP: using delete to clean up resources Feb 15 11:28:57.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:57.642: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 15 11:28:57.642: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 15 11:28:57.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-r4qs8' Feb 15 11:28:57.845: INFO: stderr: "No resources found.\n" Feb 15 11:28:57.846: INFO: stdout: "" Feb 15 11:28:57.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-r4qs8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 11:28:58.157: INFO: stderr: "" Feb 15 11:28:58.157: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:28:58.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r4qs8" for this suite. Feb 15 11:29:22.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:29:22.577: INFO: namespace: e2e-tests-kubectl-r4qs8, resource: bindings, ignored listing per whitelist Feb 15 11:29:22.641: INFO: namespace e2e-tests-kubectl-r4qs8 deletion completed in 24.450075633s • [SLOW TEST:38.488 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:29:22.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 15 11:29:22.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9wpc5' Feb 15 11:29:23.220: INFO: stderr: "" Feb 15 11:29:23.220: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 15 11:29:24.727: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:24.728: INFO: Found 0 / 1 Feb 15 11:29:25.242: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:25.243: INFO: Found 0 / 1 Feb 15 11:29:26.293: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:26.293: INFO: Found 0 / 1 Feb 15 11:29:27.252: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:27.253: INFO: Found 0 / 1 Feb 15 11:29:28.263: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:28.263: INFO: Found 0 / 1 Feb 15 11:29:29.452: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:29.453: INFO: Found 0 / 1 Feb 15 11:29:30.244: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:30.244: INFO: Found 0 / 1 Feb 15 11:29:31.238: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:31.238: INFO: Found 1 / 1 Feb 15 11:29:31.238: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 15 11:29:31.244: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:31.244: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 15 11:29:31.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5sjrd --namespace=e2e-tests-kubectl-9wpc5 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 15 11:29:31.403: INFO: stderr: "" Feb 15 11:29:31.403: INFO: stdout: "pod/redis-master-5sjrd patched\n" STEP: checking annotations Feb 15 11:29:31.413: INFO: Selector matched 1 pods for map[app:redis] Feb 15 11:29:31.413: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:29:31.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9wpc5" for this suite. 
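The Kubectl patch spec above adds an annotation to the pod behind the redis-master replication controller with a strategic-merge patch. Reduced to its essence, with the namespace, pod name and patch body copied from the log (the namespace is torn down at the end of the spec, so this only illustrates the shape of the calls):

kubectl patch pod redis-master-5sjrd -n e2e-tests-kubectl-9wpc5 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'       # "pod/redis-master-5sjrd patched"
kubectl get pod redis-master-5sjrd -n e2e-tests-kubectl-9wpc5 \
  -o jsonpath='{.metadata.annotations.x}'           # prints: y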
Feb 15 11:30:05.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:30:05.522: INFO: namespace: e2e-tests-kubectl-9wpc5, resource: bindings, ignored listing per whitelist Feb 15 11:30:05.567: INFO: namespace e2e-tests-kubectl-9wpc5 deletion completed in 34.149690698s • [SLOW TEST:42.925 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:30:05.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-839a09df-4fe6-11ea-960a-0242ac110007 STEP: Creating secret with name s-test-opt-upd-839a0ac7-4fe6-11ea-960a-0242ac110007 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-839a09df-4fe6-11ea-960a-0242ac110007 STEP: Updating secret s-test-opt-upd-839a0ac7-4fe6-11ea-960a-0242ac110007 STEP: Creating secret with name s-test-opt-create-839a0afb-4fe6-11ea-960a-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:30:20.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6h2x2" for this suite. 
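The Projected secret spec above mounts optional secrets through a projected volume, then deletes one, updates another and creates a third, waiting for the kubelet to reflect each change in the mounted files. A hedged sketch of the volume layout being exercised (all secret, pod and path names are placeholders):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del       # optional, so deleting it later does not break the pod
          optional: true
      - secret:
          name: s-test-opt-create    # may be created after the pod starts
          optional: true
EOF
# creating, updating or deleting the referenced secrets is reflected under
# /etc/projected by the kubelet without restarting the pod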
Feb 15 11:30:44.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:30:44.255: INFO: namespace: e2e-tests-projected-6h2x2, resource: bindings, ignored listing per whitelist Feb 15 11:30:46.776: INFO: namespace e2e-tests-projected-6h2x2 deletion completed in 26.594238703s • [SLOW TEST:41.209 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:30:46.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9c4219b7-4fe6-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 11:30:47.124: INFO: Waiting up to 5m0s for pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-2s4bk" to be "success or failure" Feb 15 11:30:47.136: INFO: Pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.330888ms Feb 15 11:30:49.149: INFO: Pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025081763s Feb 15 11:30:51.164: INFO: Pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039426071s Feb 15 11:30:53.240: INFO: Pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115197088s Feb 15 11:30:55.336: INFO: Pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211216261s Feb 15 11:30:57.352: INFO: Pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.227489747s STEP: Saw pod success Feb 15 11:30:57.352: INFO: Pod "pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:30:57.359: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007 container secret-env-test: STEP: delete the pod Feb 15 11:30:57.512: INFO: Waiting for pod pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007 to disappear Feb 15 11:30:57.532: INFO: Pod pod-secrets-9c435a18-4fe6-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:30:57.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2s4bk" for this suite. 
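The Secrets spec above creates a secret and consumes one of its keys as a container environment variable, then checks the container output. A minimal sketch (secret name, key and value are placeholders):

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-test
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs secret-env-test    # prints SECRET_DATA=value-1 once the pod has succeeded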
Feb 15 11:31:03.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:31:03.715: INFO: namespace: e2e-tests-secrets-2s4bk, resource: bindings, ignored listing per whitelist Feb 15 11:31:03.875: INFO: namespace e2e-tests-secrets-2s4bk deletion completed in 6.3273446s • [SLOW TEST:17.099 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:31:03.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:31:14.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9xfrg" for this suite. 
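The EmptyDir wrapper spec above mounts a secret volume and a configMap volume in the same pod (the log shows it cleaning up a secret, a configmap and the pod) and verifies that the kubelet's emptyDir-backed wrapper directories for the two do not conflict. A hedged sketch of that pod shape, with all names as placeholders:

kubectl create secret generic wrapper-secret --from-literal=k=v
kubectl create configmap wrapper-configmap --from-literal=k=v
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volume-demo
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "ls /etc/secret /etc/config && sleep 3600"]
    volumeMounts:
    - {name: secret-vol, mountPath: /etc/secret}
    - {name: config-vol, mountPath: /etc/config}
  volumes:
  - name: secret-vol
    secret: {secretName: wrapper-secret}
  - name: config-vol
    configMap: {name: wrapper-configmap}
EOF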
Feb 15 11:31:20.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:31:20.913: INFO: namespace: e2e-tests-emptydir-wrapper-9xfrg, resource: bindings, ignored listing per whitelist Feb 15 11:31:20.966: INFO: namespace e2e-tests-emptydir-wrapper-9xfrg deletion completed in 6.55391242s • [SLOW TEST:17.090 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:31:20.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 15 11:31:21.193: INFO: Waiting up to 5m0s for pod "client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007" in namespace "e2e-tests-containers-pp7cb" to be "success or failure" Feb 15 11:31:21.215: INFO: Pod "client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 22.137351ms Feb 15 11:31:23.227: INFO: Pod "client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033999078s Feb 15 11:31:25.667: INFO: Pod "client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473815049s Feb 15 11:31:27.680: INFO: Pod "client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.487357839s Feb 15 11:31:29.735: INFO: Pod "client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.542325174s STEP: Saw pod success Feb 15 11:31:29.736: INFO: Pod "client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:31:29.751: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:31:29.863: INFO: Waiting for pod client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007 to disappear Feb 15 11:31:29.875: INFO: Pod client-containers-b0867c5c-4fe6-11ea-960a-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:31:29.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-pp7cb" for this suite. 
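The Docker Containers spec above verifies that when a pod spec leaves command and args empty, the container runs with the image's own ENTRYPOINT/CMD. A sketch of that shape, using nginx as a placeholder image whose default command keeps the container running (the e2e suite uses its own test image):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # no command/args: the image defaults apply
EOF
kubectl get pod image-defaults-demo -o jsonpath='{.status.phase}'   # eventually: Running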
Feb 15 11:31:35.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:31:36.045: INFO: namespace: e2e-tests-containers-pp7cb, resource: bindings, ignored listing per whitelist Feb 15 11:31:36.063: INFO: namespace e2e-tests-containers-pp7cb deletion completed in 6.180530976s • [SLOW TEST:15.095 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:31:36.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0215 11:31:38.903316 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 15 11:31:38.903: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:31:38.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-49x6n" for this suite. 
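
The garbage-collector spec above deletes a Deployment without orphaning and then waits for its ReplicaSet and pods to be collected. Cascading deletion works through ownerReferences on the dependents plus a deletion propagation policy on the owner; the Go sketch below only illustrates those two pieces, with made-up names and a placeholder UID.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	isController := true
	// A ReplicaSet created by a Deployment carries an ownerReference like this;
	// the garbage collector follows it when the owner is deleted.
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-deployment-abc123", // illustrative name
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "apps/v1",
				Kind:               "Deployment",
				Name:               "simpletest-deployment",
				UID:                types.UID("00000000-0000-0000-0000-000000000000"), // placeholder
				Controller:         &isController,
				BlockOwnerDeletion: &isController,
			}},
		},
	}

	// Deleting the Deployment with a non-orphaning propagation policy is what
	// lets the collector remove the dependent ReplicaSet and its pods.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	out, _ := json.MarshalIndent(map[string]interface{}{"replicaSet": rs, "deleteOptions": opts}, "", "  ")
	fmt.Println(string(out))
}
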
Feb 15 11:31:47.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:31:47.356: INFO: namespace: e2e-tests-gc-49x6n, resource: bindings, ignored listing per whitelist Feb 15 11:31:47.364: INFO: namespace e2e-tests-gc-49x6n deletion completed in 8.443244662s • [SLOW TEST:11.301 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:31:47.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-29m8v STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 15 11:31:47.583: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 15 11:32:25.876: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-29m8v PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 11:32:25.876: INFO: >>> kubeConfig: /root/.kube/config I0215 11:32:25.999644 8 log.go:172] (0xc00090fef0) (0xc001d8e460) Create stream I0215 11:32:25.999852 8 log.go:172] (0xc00090fef0) (0xc001d8e460) Stream added, broadcasting: 1 I0215 11:32:26.009994 8 log.go:172] (0xc00090fef0) Reply frame received for 1 I0215 11:32:26.010143 8 log.go:172] (0xc00090fef0) (0xc001f2a460) Create stream I0215 11:32:26.010155 8 log.go:172] (0xc00090fef0) (0xc001f2a460) Stream added, broadcasting: 3 I0215 11:32:26.011578 8 log.go:172] (0xc00090fef0) Reply frame received for 3 I0215 11:32:26.011614 8 log.go:172] (0xc00090fef0) (0xc001fb9ae0) Create stream I0215 11:32:26.011628 8 log.go:172] (0xc00090fef0) (0xc001fb9ae0) Stream added, broadcasting: 5 I0215 11:32:26.013001 8 log.go:172] (0xc00090fef0) Reply frame received for 5 I0215 11:32:27.189464 8 log.go:172] (0xc00090fef0) Data frame received for 3 I0215 11:32:27.189611 8 log.go:172] (0xc001f2a460) (3) Data frame handling I0215 11:32:27.189662 8 log.go:172] (0xc001f2a460) (3) Data frame sent I0215 11:32:27.355614 8 log.go:172] (0xc00090fef0) (0xc001fb9ae0) Stream removed, broadcasting: 5 I0215 11:32:27.356065 8 log.go:172] (0xc00090fef0) Data frame received for 1 I0215 11:32:27.356101 8 log.go:172] (0xc001d8e460) (1) Data frame handling I0215 11:32:27.356166 8 log.go:172] (0xc001d8e460) (1) Data frame sent I0215 11:32:27.356259 8 log.go:172] (0xc00090fef0) (0xc001d8e460) Stream removed, broadcasting: 1 I0215 
11:32:27.356827 8 log.go:172] (0xc00090fef0) (0xc001f2a460) Stream removed, broadcasting: 3 I0215 11:32:27.356893 8 log.go:172] (0xc00090fef0) Go away received I0215 11:32:27.357720 8 log.go:172] (0xc00090fef0) (0xc001d8e460) Stream removed, broadcasting: 1 I0215 11:32:27.357808 8 log.go:172] (0xc00090fef0) (0xc001f2a460) Stream removed, broadcasting: 3 I0215 11:32:27.357847 8 log.go:172] (0xc00090fef0) (0xc001fb9ae0) Stream removed, broadcasting: 5 Feb 15 11:32:27.358: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:32:27.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-29m8v" for this suite. Feb 15 11:32:51.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:32:51.732: INFO: namespace: e2e-tests-pod-network-test-29m8v, resource: bindings, ignored listing per whitelist Feb 15 11:32:51.883: INFO: namespace e2e-tests-pod-network-test-29m8v deletion completed in 24.495446044s • [SLOW TEST:64.518 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:32:51.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:32:52.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-c8sg4" to be "success or failure" Feb 15 11:32:52.387: INFO: Pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.067349ms Feb 15 11:32:54.413: INFO: Pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038958211s Feb 15 11:32:56.441: INFO: Pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066992265s Feb 15 11:32:58.558: INFO: Pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.184321979s Feb 15 11:33:00.590: INFO: Pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21603975s Feb 15 11:33:02.660: INFO: Pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.285998386s STEP: Saw pod success Feb 15 11:33:02.661: INFO: Pod "downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:33:02.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:33:03.407: INFO: Waiting for pod downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007 to disappear Feb 15 11:33:03.648: INFO: Pod downwardapi-volume-e6ea39b3-4fe6-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:33:03.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c8sg4" for this suite. Feb 15 11:33:11.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:33:11.739: INFO: namespace: e2e-tests-projected-c8sg4, resource: bindings, ignored listing per whitelist Feb 15 11:33:11.897: INFO: namespace e2e-tests-projected-c8sg4 deletion completed in 8.236752785s • [SLOW TEST:20.013 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:33:11.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 15 11:33:12.104: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:33:36.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-jlnrx" for this suite. 
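
The InitContainer spec above only logs "PodSpec: initContainers in spec.initContainers", so the concrete manifest is not shown. As a hedged illustration, a RestartAlways pod whose init containers must each run to completion, in order, before the main container starts could look like the Go sketch below; all names, images, and commands are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			// Always is the default restart policy; this is the case the spec covers.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// Each init container must exit successfully before the next one starts.
				{Name: "init-1", Image: "busybox", Command: []string{"sh", "-c", "true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"sh", "-c", "true"}},
			},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
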
Feb 15 11:34:00.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:34:00.995: INFO: namespace: e2e-tests-init-container-jlnrx, resource: bindings, ignored listing per whitelist Feb 15 11:34:01.069: INFO: namespace e2e-tests-init-container-jlnrx deletion completed in 24.295123101s • [SLOW TEST:49.171 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:34:01.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-rtnnh STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-rtnnh STEP: Deleting pre-stop pod Feb 15 11:34:24.361: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:34:24.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-rtnnh" for this suite. 
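
In the PreStop spec above, deleting the tester pod triggers its preStop hook, and the server pod's state then shows "prestop": 1. The hook lives on the container's lifecycle; the Go sketch below shows the general shape. The hook target (an HTTP GET against the server pod) and all names, ports, and addresses are assumptions, and the lifecycle field is called Handler in client libraries contemporary with this 1.13-era cluster but LifecycleHandler in newer releases.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// PreStop runs when the pod is being deleted, before the container
					// receives SIGTERM; here it calls back to a server pod, which is
					// the pattern this spec checks for.
					// NOTE: corev1.Handler was renamed LifecycleHandler in newer API versions.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/prestop",           // assumed path
							Host: "10.32.0.4",          // assumed server pod IP
							Port: intstr.FromInt(8080), // assumed port
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
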
Feb 15 11:35:04.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:35:04.697: INFO: namespace: e2e-tests-prestop-rtnnh, resource: bindings, ignored listing per whitelist Feb 15 11:35:04.720: INFO: namespace e2e-tests-prestop-rtnnh deletion completed in 40.317060906s • [SLOW TEST:63.650 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:35:04.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 15 11:35:04.929: INFO: Waiting up to 5m0s for pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-b9sv2" to be "success or failure" Feb 15 11:35:04.953: INFO: Pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 23.487142ms Feb 15 11:35:07.066: INFO: Pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136698761s Feb 15 11:35:09.081: INFO: Pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150889144s Feb 15 11:35:11.300: INFO: Pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.369872291s Feb 15 11:35:13.313: INFO: Pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.383163139s Feb 15 11:35:15.331: INFO: Pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.400734282s STEP: Saw pod success Feb 15 11:35:15.331: INFO: Pod "pod-35ec701c-4fe7-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:35:15.336: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-35ec701c-4fe7-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:35:15.915: INFO: Waiting for pod pod-35ec701c-4fe7-11ea-960a-0242ac110007 to disappear Feb 15 11:35:15.987: INFO: Pod pod-35ec701c-4fe7-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:35:15.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-b9sv2" for this suite. 
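
The EmptyDir spec above exercises an emptyDir volume on the default medium, written by a non-root user with 0644 file mode. A sketch of that kind of pod in Go follows; the image, user ID, and command are illustrative only.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // illustrative non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file with 0644 permissions into the emptyDir mount,
				// then print its mode so a test could assert on it.
				Command: []string{"sh", "-c",
					"echo hello > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume/file"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
				VolumeMounts:    []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Default medium means node-local storage, as opposed to Memory (tmpfs).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
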
Feb 15 11:35:22.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:35:22.203: INFO: namespace: e2e-tests-emptydir-b9sv2, resource: bindings, ignored listing per whitelist Feb 15 11:35:22.268: INFO: namespace e2e-tests-emptydir-b9sv2 deletion completed in 6.204394013s • [SLOW TEST:17.548 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:35:22.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-406e02d7-4fe7-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 11:35:22.566: INFO: Waiting up to 5m0s for pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-xmrct" to be "success or failure" Feb 15 11:35:22.580: INFO: Pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.427143ms Feb 15 11:35:24.795: INFO: Pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228648819s Feb 15 11:35:26.811: INFO: Pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245123668s Feb 15 11:35:29.612: INFO: Pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.046077267s Feb 15 11:35:31.731: INFO: Pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.165212979s Feb 15 11:35:33.762: INFO: Pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.196099692s STEP: Saw pod success Feb 15 11:35:33.763: INFO: Pod "pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:35:33.777: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 15 11:35:34.641: INFO: Waiting for pod pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007 to disappear Feb 15 11:35:34.649: INFO: Pod pod-configmaps-4070553d-4fe7-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:35:34.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xmrct" for this suite. Feb 15 11:35:40.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:35:40.887: INFO: namespace: e2e-tests-configmap-xmrct, resource: bindings, ignored listing per whitelist Feb 15 11:35:41.003: INFO: namespace e2e-tests-configmap-xmrct deletion completed in 6.340092854s • [SLOW TEST:18.734 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:35:41.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 15 11:35:41.205: INFO: Waiting up to 5m0s for pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-njjlw" to be "success or failure" Feb 15 11:35:41.222: INFO: Pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.509978ms Feb 15 11:35:43.248: INFO: Pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042852938s Feb 15 11:35:45.262: INFO: Pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05647278s Feb 15 11:35:47.840: INFO: Pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635020783s Feb 15 11:35:49.982: INFO: Pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.777097751s Feb 15 11:35:52.001: INFO: Pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.79568741s STEP: Saw pod success Feb 15 11:35:52.001: INFO: Pod "downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:35:52.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 11:35:52.130: INFO: Waiting for pod downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007 to disappear Feb 15 11:35:52.155: INFO: Pod downward-api-4b8f4454-4fe7-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:35:52.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-njjlw" for this suite. Feb 15 11:35:58.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:35:58.513: INFO: namespace: e2e-tests-downward-api-njjlw, resource: bindings, ignored listing per whitelist Feb 15 11:35:58.529: INFO: namespace e2e-tests-downward-api-njjlw deletion completed in 6.291299504s • [SLOW TEST:17.526 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:35:58.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:35:58.884: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-489xg" to be "success or failure" Feb 15 11:35:58.896: INFO: Pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.766899ms Feb 15 11:36:01.982: INFO: Pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.097322761s Feb 15 11:36:04.021: INFO: Pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.136673421s Feb 15 11:36:06.067: INFO: Pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.182920787s Feb 15 11:36:08.088: INFO: Pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.203737287s Feb 15 11:36:10.112: INFO: Pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.22764965s STEP: Saw pod success Feb 15 11:36:10.112: INFO: Pod "downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:36:10.116: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:36:10.202: INFO: Waiting for pod downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007 to disappear Feb 15 11:36:10.278: INFO: Pod downwardapi-volume-5616001d-4fe7-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:36:10.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-489xg" for this suite. Feb 15 11:36:16.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:36:16.368: INFO: namespace: e2e-tests-projected-489xg, resource: bindings, ignored listing per whitelist Feb 15 11:36:16.513: INFO: namespace e2e-tests-projected-489xg deletion completed in 6.227237909s • [SLOW TEST:17.983 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:36:16.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:36:16.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-dsk65" to be "success or failure" Feb 15 11:36:17.084: INFO: Pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 214.10385ms Feb 15 11:36:19.159: INFO: Pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289186223s Feb 15 11:36:21.174: INFO: Pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304463513s Feb 15 11:36:23.211: INFO: Pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.340894349s Feb 15 11:36:25.225: INFO: Pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35486809s Feb 15 11:36:27.275: INFO: Pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.404914761s STEP: Saw pod success Feb 15 11:36:27.275: INFO: Pod "downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:36:27.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:36:27.452: INFO: Waiting for pod downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007 to disappear Feb 15 11:36:27.469: INFO: Pod downwardapi-volume-60ce7c1f-4fe7-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:36:27.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dsk65" for this suite. Feb 15 11:36:33.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:36:33.755: INFO: namespace: e2e-tests-projected-dsk65, resource: bindings, ignored listing per whitelist Feb 15 11:36:33.868: INFO: namespace e2e-tests-projected-dsk65 deletion completed in 6.382413834s • [SLOW TEST:17.355 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:36:33.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:36:34.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-29rqr" for this suite. 
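
The "Pods Set QOS Class" spec above submits a pod and verifies the QoS class recorded in its status. The class is derived from the containers' resource requests and limits; the Go sketch below shows a pod that would land in the Guaranteed class (requests equal to limits), with made-up resource values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Requests == limits for every container and resource => QoS class "Guaranteed".
	// Requests < limits (or only some set) => "Burstable"; nothing set => "BestEffort".
	resources := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("64Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("64Mi"),
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "app",
				Image:     "busybox",
				Command:   []string{"sleep", "3600"},
				Resources: resources,
			}},
		},
	}
	// After admission, pod.Status.QOSClass is populated (here it would be "Guaranteed"),
	// which is the field this kind of spec asserts on.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
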
Feb 15 11:36:58.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:36:58.377: INFO: namespace: e2e-tests-pods-29rqr, resource: bindings, ignored listing per whitelist Feb 15 11:36:58.638: INFO: namespace e2e-tests-pods-29rqr deletion completed in 24.365171445s • [SLOW TEST:24.770 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:36:58.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-lr92w Feb 15 11:37:08.900: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-lr92w STEP: checking the pod's current state and verifying that restartCount is present Feb 15 11:37:08.907: INFO: Initial restart count of pod liveness-exec is 0 Feb 15 11:38:06.321: INFO: Restart count of pod e2e-tests-container-probe-lr92w/liveness-exec is now 1 (57.413654145s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:38:06.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-lr92w" for this suite. 
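
The probing spec above creates pod liveness-exec with an exec liveness probe running "cat /tmp/health", and its restart count goes from 0 to 1 once the probed file stops existing. A hedged Go sketch of such a pod follows; the container command and probe timings are assumptions (the conventional pattern is to create the file, sleep, then remove it so the probe starts failing), and the probe field is named Handler in client libraries contemporary with this 1.13-era cluster but ProbeHandler in newer ones.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Assumed behaviour: the file exists for a while, then is removed,
				// so the probe below starts failing and the kubelet restarts the container.
				Command: []string{"sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// NOTE: corev1.Handler was renamed ProbeHandler in newer API versions.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
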
Feb 15 11:38:14.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:38:14.660: INFO: namespace: e2e-tests-container-probe-lr92w, resource: bindings, ignored listing per whitelist Feb 15 11:38:14.720: INFO: namespace e2e-tests-container-probe-lr92w deletion completed in 8.334856647s • [SLOW TEST:76.081 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:38:14.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:38:15.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-r7bqf" for this suite. 
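
The Kubelet spec above schedules a busybox command that always fails and only checks that the resulting pod can still be deleted. A minimal Go sketch of that situation, with an assumed failing command and a zero-grace-period delete option:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits non-zero, so the container keeps failing
			}},
		},
	}

	// Deleting such a pod is still expected to succeed; a grace period of 0 forces
	// immediate removal. (With a clientset these options would be passed to the pod Delete call.)
	grace := int64(0)
	opts := metav1.DeleteOptions{GracePeriodSeconds: &grace}

	out, _ := json.MarshalIndent(map[string]interface{}{"pod": pod, "deleteOptions": opts}, "", "  ")
	fmt.Println(string(out))
}
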
Feb 15 11:38:21.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:38:21.591: INFO: namespace: e2e-tests-kubelet-test-r7bqf, resource: bindings, ignored listing per whitelist Feb 15 11:38:21.595: INFO: namespace e2e-tests-kubelet-test-r7bqf deletion completed in 6.181452525s • [SLOW TEST:6.873 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:38:21.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-mxn56 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-mxn56 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-mxn56 Feb 15 11:38:21.867: INFO: Found 0 stateful pods, waiting for 1 Feb 15 11:38:31.907: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 15 11:38:31.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:38:33.004: INFO: stderr: "I0215 11:38:32.155355 1174 log.go:172] (0xc0001380b0) (0xc0008b6500) Create stream\nI0215 11:38:32.155785 1174 log.go:172] (0xc0001380b0) (0xc0008b6500) Stream added, broadcasting: 1\nI0215 11:38:32.165485 1174 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0215 11:38:32.165664 1174 log.go:172] (0xc0001380b0) (0xc0002bac80) Create stream\nI0215 11:38:32.165694 1174 log.go:172] (0xc0001380b0) (0xc0002bac80) Stream added, broadcasting: 3\nI0215 11:38:32.167464 1174 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0215 11:38:32.167548 1174 log.go:172] (0xc0001380b0) (0xc000608000) Create stream\nI0215 11:38:32.167569 1174 log.go:172] (0xc0001380b0) (0xc000608000) Stream added, broadcasting: 5\nI0215 11:38:32.170456 1174 
log.go:172] (0xc0001380b0) Reply frame received for 5\nI0215 11:38:32.617776 1174 log.go:172] (0xc0001380b0) Data frame received for 3\nI0215 11:38:32.617963 1174 log.go:172] (0xc0002bac80) (3) Data frame handling\nI0215 11:38:32.618067 1174 log.go:172] (0xc0002bac80) (3) Data frame sent\nI0215 11:38:32.989178 1174 log.go:172] (0xc0001380b0) Data frame received for 1\nI0215 11:38:32.989467 1174 log.go:172] (0xc0001380b0) (0xc000608000) Stream removed, broadcasting: 5\nI0215 11:38:32.989640 1174 log.go:172] (0xc0001380b0) (0xc0002bac80) Stream removed, broadcasting: 3\nI0215 11:38:32.989740 1174 log.go:172] (0xc0008b6500) (1) Data frame handling\nI0215 11:38:32.989887 1174 log.go:172] (0xc0008b6500) (1) Data frame sent\nI0215 11:38:32.989923 1174 log.go:172] (0xc0001380b0) (0xc0008b6500) Stream removed, broadcasting: 1\nI0215 11:38:32.989943 1174 log.go:172] (0xc0001380b0) Go away received\nI0215 11:38:32.990767 1174 log.go:172] (0xc0001380b0) (0xc0008b6500) Stream removed, broadcasting: 1\nI0215 11:38:32.990791 1174 log.go:172] (0xc0001380b0) (0xc0002bac80) Stream removed, broadcasting: 3\nI0215 11:38:32.990805 1174 log.go:172] (0xc0001380b0) (0xc000608000) Stream removed, broadcasting: 5\n" Feb 15 11:38:33.005: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:38:33.005: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:38:33.023: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:38:33.023: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 11:38:33.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998151s Feb 15 11:38:34.090: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982351231s Feb 15 11:38:35.109: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.965946936s Feb 15 11:38:36.124: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.947481036s Feb 15 11:38:37.162: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.932213517s Feb 15 11:38:38.178: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.893867474s Feb 15 11:38:39.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.877824365s Feb 15 11:38:40.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.816729722s Feb 15 11:38:41.279: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.799488113s Feb 15 11:38:42.293: INFO: Verifying statefulset ss doesn't scale past 1 for another 777.629389ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-mxn56 Feb 15 11:38:43.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:38:44.022: INFO: stderr: "I0215 11:38:43.547323 1195 log.go:172] (0xc0001386e0) (0xc00072e640) Create stream\nI0215 11:38:43.548101 1195 log.go:172] (0xc0001386e0) (0xc00072e640) Stream added, broadcasting: 1\nI0215 11:38:43.560783 1195 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0215 11:38:43.560883 1195 log.go:172] (0xc0001386e0) (0xc0005dcd20) Create stream\nI0215 11:38:43.560900 1195 log.go:172] (0xc0001386e0) (0xc0005dcd20) Stream added, broadcasting: 3\nI0215 11:38:43.563467 1195 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0215 
11:38:43.563500 1195 log.go:172] (0xc0001386e0) (0xc00072e6e0) Create stream\nI0215 11:38:43.563511 1195 log.go:172] (0xc0001386e0) (0xc00072e6e0) Stream added, broadcasting: 5\nI0215 11:38:43.565346 1195 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0215 11:38:43.738096 1195 log.go:172] (0xc0001386e0) Data frame received for 3\nI0215 11:38:43.738221 1195 log.go:172] (0xc0005dcd20) (3) Data frame handling\nI0215 11:38:43.738273 1195 log.go:172] (0xc0005dcd20) (3) Data frame sent\nI0215 11:38:43.999458 1195 log.go:172] (0xc0001386e0) Data frame received for 1\nI0215 11:38:43.999583 1195 log.go:172] (0xc00072e640) (1) Data frame handling\nI0215 11:38:43.999611 1195 log.go:172] (0xc00072e640) (1) Data frame sent\nI0215 11:38:43.999632 1195 log.go:172] (0xc0001386e0) (0xc00072e640) Stream removed, broadcasting: 1\nI0215 11:38:44.002216 1195 log.go:172] (0xc0001386e0) (0xc0005dcd20) Stream removed, broadcasting: 3\nI0215 11:38:44.002914 1195 log.go:172] (0xc0001386e0) (0xc00072e6e0) Stream removed, broadcasting: 5\nI0215 11:38:44.002968 1195 log.go:172] (0xc0001386e0) Go away received\nI0215 11:38:44.003209 1195 log.go:172] (0xc0001386e0) (0xc00072e640) Stream removed, broadcasting: 1\nI0215 11:38:44.003312 1195 log.go:172] (0xc0001386e0) (0xc0005dcd20) Stream removed, broadcasting: 3\nI0215 11:38:44.003335 1195 log.go:172] (0xc0001386e0) (0xc00072e6e0) Stream removed, broadcasting: 5\n" Feb 15 11:38:44.023: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:38:44.023: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:38:44.066: INFO: Found 1 stateful pods, waiting for 3 Feb 15 11:38:54.096: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:38:54.096: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:38:54.096: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 15 11:39:04.133: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:39:04.134: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:39:04.134: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 15 11:39:04.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:39:05.123: INFO: stderr: "I0215 11:39:04.388434 1217 log.go:172] (0xc000138000) (0xc000367220) Create stream\nI0215 11:39:04.388784 1217 log.go:172] (0xc000138000) (0xc000367220) Stream added, broadcasting: 1\nI0215 11:39:04.394624 1217 log.go:172] (0xc000138000) Reply frame received for 1\nI0215 11:39:04.394677 1217 log.go:172] (0xc000138000) (0xc000323040) Create stream\nI0215 11:39:04.394687 1217 log.go:172] (0xc000138000) (0xc000323040) Stream added, broadcasting: 3\nI0215 11:39:04.395547 1217 log.go:172] (0xc000138000) Reply frame received for 3\nI0215 11:39:04.395571 1217 log.go:172] (0xc000138000) (0xc0002b0000) Create stream\nI0215 11:39:04.395579 1217 log.go:172] (0xc000138000) (0xc0002b0000) Stream added, broadcasting: 5\nI0215 11:39:04.396338 1217 log.go:172] (0xc000138000) Reply frame received for 5\nI0215 
11:39:04.785920 1217 log.go:172] (0xc000138000) Data frame received for 3\nI0215 11:39:04.786241 1217 log.go:172] (0xc000323040) (3) Data frame handling\nI0215 11:39:04.786301 1217 log.go:172] (0xc000323040) (3) Data frame sent\nI0215 11:39:05.110654 1217 log.go:172] (0xc000138000) (0xc0002b0000) Stream removed, broadcasting: 5\nI0215 11:39:05.110925 1217 log.go:172] (0xc000138000) Data frame received for 1\nI0215 11:39:05.110937 1217 log.go:172] (0xc000367220) (1) Data frame handling\nI0215 11:39:05.110958 1217 log.go:172] (0xc000367220) (1) Data frame sent\nI0215 11:39:05.111009 1217 log.go:172] (0xc000138000) (0xc000367220) Stream removed, broadcasting: 1\nI0215 11:39:05.111776 1217 log.go:172] (0xc000138000) (0xc000323040) Stream removed, broadcasting: 3\nI0215 11:39:05.111802 1217 log.go:172] (0xc000138000) Go away received\nI0215 11:39:05.112181 1217 log.go:172] (0xc000138000) (0xc000367220) Stream removed, broadcasting: 1\nI0215 11:39:05.112204 1217 log.go:172] (0xc000138000) (0xc000323040) Stream removed, broadcasting: 3\nI0215 11:39:05.112210 1217 log.go:172] (0xc000138000) (0xc0002b0000) Stream removed, broadcasting: 5\n" Feb 15 11:39:05.124: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:39:05.124: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:39:05.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:39:05.671: INFO: stderr: "I0215 11:39:05.395229 1239 log.go:172] (0xc000138790) (0xc0005e9360) Create stream\nI0215 11:39:05.395541 1239 log.go:172] (0xc000138790) (0xc0005e9360) Stream added, broadcasting: 1\nI0215 11:39:05.399425 1239 log.go:172] (0xc000138790) Reply frame received for 1\nI0215 11:39:05.399456 1239 log.go:172] (0xc000138790) (0xc0005e9400) Create stream\nI0215 11:39:05.399463 1239 log.go:172] (0xc000138790) (0xc0005e9400) Stream added, broadcasting: 3\nI0215 11:39:05.400427 1239 log.go:172] (0xc000138790) Reply frame received for 3\nI0215 11:39:05.400471 1239 log.go:172] (0xc000138790) (0xc00054a000) Create stream\nI0215 11:39:05.400498 1239 log.go:172] (0xc000138790) (0xc00054a000) Stream added, broadcasting: 5\nI0215 11:39:05.401492 1239 log.go:172] (0xc000138790) Reply frame received for 5\nI0215 11:39:05.544553 1239 log.go:172] (0xc000138790) Data frame received for 3\nI0215 11:39:05.544640 1239 log.go:172] (0xc0005e9400) (3) Data frame handling\nI0215 11:39:05.544664 1239 log.go:172] (0xc0005e9400) (3) Data frame sent\nI0215 11:39:05.656087 1239 log.go:172] (0xc000138790) Data frame received for 1\nI0215 11:39:05.656241 1239 log.go:172] (0xc0005e9360) (1) Data frame handling\nI0215 11:39:05.656289 1239 log.go:172] (0xc0005e9360) (1) Data frame sent\nI0215 11:39:05.656428 1239 log.go:172] (0xc000138790) (0xc0005e9360) Stream removed, broadcasting: 1\nI0215 11:39:05.657112 1239 log.go:172] (0xc000138790) (0xc0005e9400) Stream removed, broadcasting: 3\nI0215 11:39:05.657714 1239 log.go:172] (0xc000138790) (0xc00054a000) Stream removed, broadcasting: 5\nI0215 11:39:05.657809 1239 log.go:172] (0xc000138790) (0xc0005e9360) Stream removed, broadcasting: 1\nI0215 11:39:05.657831 1239 log.go:172] (0xc000138790) (0xc0005e9400) Stream removed, broadcasting: 3\nI0215 11:39:05.657851 1239 log.go:172] (0xc000138790) (0xc00054a000) Stream removed, broadcasting: 5\n" Feb 15 11:39:05.672: INFO: 
stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:39:05.672: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:39:05.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:39:06.345: INFO: stderr: "I0215 11:39:06.064622 1260 log.go:172] (0xc00070a370) (0xc00074e640) Create stream\nI0215 11:39:06.064919 1260 log.go:172] (0xc00070a370) (0xc00074e640) Stream added, broadcasting: 1\nI0215 11:39:06.071129 1260 log.go:172] (0xc00070a370) Reply frame received for 1\nI0215 11:39:06.071188 1260 log.go:172] (0xc00070a370) (0xc0007a8c80) Create stream\nI0215 11:39:06.071205 1260 log.go:172] (0xc00070a370) (0xc0007a8c80) Stream added, broadcasting: 3\nI0215 11:39:06.072444 1260 log.go:172] (0xc00070a370) Reply frame received for 3\nI0215 11:39:06.072466 1260 log.go:172] (0xc00070a370) (0xc0007a8dc0) Create stream\nI0215 11:39:06.072472 1260 log.go:172] (0xc00070a370) (0xc0007a8dc0) Stream added, broadcasting: 5\nI0215 11:39:06.073528 1260 log.go:172] (0xc00070a370) Reply frame received for 5\nI0215 11:39:06.238191 1260 log.go:172] (0xc00070a370) Data frame received for 3\nI0215 11:39:06.238282 1260 log.go:172] (0xc0007a8c80) (3) Data frame handling\nI0215 11:39:06.238308 1260 log.go:172] (0xc0007a8c80) (3) Data frame sent\nI0215 11:39:06.332430 1260 log.go:172] (0xc00070a370) Data frame received for 1\nI0215 11:39:06.332539 1260 log.go:172] (0xc00074e640) (1) Data frame handling\nI0215 11:39:06.332564 1260 log.go:172] (0xc00074e640) (1) Data frame sent\nI0215 11:39:06.333201 1260 log.go:172] (0xc00070a370) (0xc00074e640) Stream removed, broadcasting: 1\nI0215 11:39:06.333304 1260 log.go:172] (0xc00070a370) (0xc0007a8c80) Stream removed, broadcasting: 3\nI0215 11:39:06.334165 1260 log.go:172] (0xc00070a370) (0xc0007a8dc0) Stream removed, broadcasting: 5\nI0215 11:39:06.334204 1260 log.go:172] (0xc00070a370) (0xc00074e640) Stream removed, broadcasting: 1\nI0215 11:39:06.334213 1260 log.go:172] (0xc00070a370) (0xc0007a8c80) Stream removed, broadcasting: 3\nI0215 11:39:06.334219 1260 log.go:172] (0xc00070a370) (0xc0007a8dc0) Stream removed, broadcasting: 5\n" Feb 15 11:39:06.346: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:39:06.346: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:39:06.346: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 11:39:06.355: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 15 11:39:16.380: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:39:16.381: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:39:16.381: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:39:16.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999989s Feb 15 11:39:17.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.888641274s Feb 15 11:39:18.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.868532529s Feb 15 11:39:19.626: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.829912594s Feb 15 11:39:20.654: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 5.790546662s Feb 15 11:39:21.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.763126667s Feb 15 11:39:22.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.729911139s Feb 15 11:39:23.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.70485969s Feb 15 11:39:24.757: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.693208232s Feb 15 11:39:25.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 659.565398ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-mxn56 Feb 15 11:39:26.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:39:27.336: INFO: stderr: "I0215 11:39:27.029593 1282 log.go:172] (0xc0001386e0) (0xc00078f180) Create stream\nI0215 11:39:27.029921 1282 log.go:172] (0xc0001386e0) (0xc00078f180) Stream added, broadcasting: 1\nI0215 11:39:27.038529 1282 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0215 11:39:27.038722 1282 log.go:172] (0xc0001386e0) (0xc0006e4000) Create stream\nI0215 11:39:27.038743 1282 log.go:172] (0xc0001386e0) (0xc0006e4000) Stream added, broadcasting: 3\nI0215 11:39:27.040484 1282 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0215 11:39:27.040523 1282 log.go:172] (0xc0001386e0) (0xc0006e4140) Create stream\nI0215 11:39:27.040542 1282 log.go:172] (0xc0001386e0) (0xc0006e4140) Stream added, broadcasting: 5\nI0215 11:39:27.042488 1282 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0215 11:39:27.173356 1282 log.go:172] (0xc0001386e0) Data frame received for 3\nI0215 11:39:27.173581 1282 log.go:172] (0xc0006e4000) (3) Data frame handling\nI0215 11:39:27.173632 1282 log.go:172] (0xc0006e4000) (3) Data frame sent\nI0215 11:39:27.312836 1282 log.go:172] (0xc0001386e0) Data frame received for 1\nI0215 11:39:27.313080 1282 log.go:172] (0xc0001386e0) (0xc0006e4000) Stream removed, broadcasting: 3\nI0215 11:39:27.313203 1282 log.go:172] (0xc00078f180) (1) Data frame handling\nI0215 11:39:27.313263 1282 log.go:172] (0xc00078f180) (1) Data frame sent\nI0215 11:39:27.313352 1282 log.go:172] (0xc0001386e0) (0xc0006e4140) Stream removed, broadcasting: 5\nI0215 11:39:27.313451 1282 log.go:172] (0xc0001386e0) (0xc00078f180) Stream removed, broadcasting: 1\nI0215 11:39:27.313470 1282 log.go:172] (0xc0001386e0) Go away received\nI0215 11:39:27.314275 1282 log.go:172] (0xc0001386e0) (0xc00078f180) Stream removed, broadcasting: 1\nI0215 11:39:27.314392 1282 log.go:172] (0xc0001386e0) (0xc0006e4000) Stream removed, broadcasting: 3\nI0215 11:39:27.314435 1282 log.go:172] (0xc0001386e0) (0xc0006e4140) Stream removed, broadcasting: 5\n" Feb 15 11:39:27.337: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:39:27.337: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:39:27.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:39:28.018: INFO: stderr: "I0215 11:39:27.548986 1304 log.go:172] (0xc000712370) (0xc0007ae640) Create stream\nI0215 11:39:27.549422 1304 log.go:172] (0xc000712370) (0xc0007ae640) Stream added, broadcasting: 1\nI0215 
11:39:27.554248 1304 log.go:172] (0xc000712370) Reply frame received for 1\nI0215 11:39:27.554293 1304 log.go:172] (0xc000712370) (0xc0005c4d20) Create stream\nI0215 11:39:27.554304 1304 log.go:172] (0xc000712370) (0xc0005c4d20) Stream added, broadcasting: 3\nI0215 11:39:27.555141 1304 log.go:172] (0xc000712370) Reply frame received for 3\nI0215 11:39:27.555164 1304 log.go:172] (0xc000712370) (0xc00051c000) Create stream\nI0215 11:39:27.555173 1304 log.go:172] (0xc000712370) (0xc00051c000) Stream added, broadcasting: 5\nI0215 11:39:27.555954 1304 log.go:172] (0xc000712370) Reply frame received for 5\nI0215 11:39:27.718617 1304 log.go:172] (0xc000712370) Data frame received for 3\nI0215 11:39:27.718733 1304 log.go:172] (0xc0005c4d20) (3) Data frame handling\nI0215 11:39:27.718768 1304 log.go:172] (0xc0005c4d20) (3) Data frame sent\nI0215 11:39:28.006399 1304 log.go:172] (0xc000712370) Data frame received for 1\nI0215 11:39:28.006521 1304 log.go:172] (0xc0007ae640) (1) Data frame handling\nI0215 11:39:28.006568 1304 log.go:172] (0xc0007ae640) (1) Data frame sent\nI0215 11:39:28.006921 1304 log.go:172] (0xc000712370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0215 11:39:28.007150 1304 log.go:172] (0xc000712370) (0xc0005c4d20) Stream removed, broadcasting: 3\nI0215 11:39:28.007574 1304 log.go:172] (0xc000712370) (0xc00051c000) Stream removed, broadcasting: 5\nI0215 11:39:28.007650 1304 log.go:172] (0xc000712370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0215 11:39:28.007658 1304 log.go:172] (0xc000712370) (0xc0005c4d20) Stream removed, broadcasting: 3\nI0215 11:39:28.007668 1304 log.go:172] (0xc000712370) (0xc00051c000) Stream removed, broadcasting: 5\n" Feb 15 11:39:28.018: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:39:28.018: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:39:28.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mxn56 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:39:28.424: INFO: stderr: "I0215 11:39:28.178359 1326 log.go:172] (0xc00077e160) (0xc0005a0780) Create stream\nI0215 11:39:28.178595 1326 log.go:172] (0xc00077e160) (0xc0005a0780) Stream added, broadcasting: 1\nI0215 11:39:28.182789 1326 log.go:172] (0xc00077e160) Reply frame received for 1\nI0215 11:39:28.182819 1326 log.go:172] (0xc00077e160) (0xc0002ceb40) Create stream\nI0215 11:39:28.182825 1326 log.go:172] (0xc00077e160) (0xc0002ceb40) Stream added, broadcasting: 3\nI0215 11:39:28.183655 1326 log.go:172] (0xc00077e160) Reply frame received for 3\nI0215 11:39:28.183677 1326 log.go:172] (0xc00077e160) (0xc0005a0820) Create stream\nI0215 11:39:28.183683 1326 log.go:172] (0xc00077e160) (0xc0005a0820) Stream added, broadcasting: 5\nI0215 11:39:28.184762 1326 log.go:172] (0xc00077e160) Reply frame received for 5\nI0215 11:39:28.285803 1326 log.go:172] (0xc00077e160) Data frame received for 3\nI0215 11:39:28.285876 1326 log.go:172] (0xc0002ceb40) (3) Data frame handling\nI0215 11:39:28.285900 1326 log.go:172] (0xc0002ceb40) (3) Data frame sent\nI0215 11:39:28.414383 1326 log.go:172] (0xc00077e160) Data frame received for 1\nI0215 11:39:28.414589 1326 log.go:172] (0xc0005a0780) (1) Data frame handling\nI0215 11:39:28.414634 1326 log.go:172] (0xc0005a0780) (1) Data frame sent\nI0215 11:39:28.414668 1326 log.go:172] (0xc00077e160) (0xc0005a0780) Stream removed, broadcasting: 
1\nI0215 11:39:28.415032 1326 log.go:172] (0xc00077e160) (0xc0002ceb40) Stream removed, broadcasting: 3\nI0215 11:39:28.415082 1326 log.go:172] (0xc00077e160) (0xc0005a0820) Stream removed, broadcasting: 5\nI0215 11:39:28.415184 1326 log.go:172] (0xc00077e160) Go away received\nI0215 11:39:28.416256 1326 log.go:172] (0xc00077e160) (0xc0005a0780) Stream removed, broadcasting: 1\nI0215 11:39:28.416274 1326 log.go:172] (0xc00077e160) (0xc0002ceb40) Stream removed, broadcasting: 3\nI0215 11:39:28.416285 1326 log.go:172] (0xc00077e160) (0xc0005a0820) Stream removed, broadcasting: 5\n" Feb 15 11:39:28.425: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:39:28.425: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:39:28.425: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 15 11:39:58.491: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mxn56 Feb 15 11:39:58.668: INFO: Scaling statefulset ss to 0 Feb 15 11:39:58.782: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 11:39:58.833: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:39:58.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-mxn56" for this suite. Feb 15 11:40:07.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:40:07.117: INFO: namespace: e2e-tests-statefulset-mxn56, resource: bindings, ignored listing per whitelist Feb 15 11:40:07.199: INFO: namespace e2e-tests-statefulset-mxn56 deletion completed in 8.274462379s • [SLOW TEST:105.604 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:40:07.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 
11:40:07.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-g7hn9" to be "success or failure" Feb 15 11:40:07.425: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 66.574117ms Feb 15 11:40:09.453: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093963296s Feb 15 11:40:11.468: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109449698s Feb 15 11:40:13.486: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127602742s Feb 15 11:40:15.770: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411194539s Feb 15 11:40:17.801: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.441883058s Feb 15 11:40:20.848: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.488762271s STEP: Saw pod success Feb 15 11:40:20.848: INFO: Pod "downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:40:20.867: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:40:21.263: INFO: Waiting for pod downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007 to disappear Feb 15 11:40:21.276: INFO: Pod downwardapi-volume-ea312426-4fe7-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:40:21.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g7hn9" for this suite. 
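For reference, the projected downwardAPI test above creates a pod whose volume exposes pod fields as files, with an explicit mode set on the volume item. The following is a minimal, illustrative sketch written with the core/v1 Go types the e2e framework itself uses; the image, file path, projected field and mode value here are assumptions for illustration, not the framework's actual pod spec (only the container name client-container is taken from the log above).

// Sketch: a pod with a projected downward API volume whose single item
// carries an explicit file mode, in the spirit of the
// "should set mode on item file" test above. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed mode value; the test checks that the mode it set shows up on the file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container", // name matches the container in the log above
				Image:   "busybox",          // assumed image
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname", // assumed file path
									Mode: &mode,
									FieldRef: &corev1.ObjectFieldSelector{
										APIVersion: "v1",
										FieldPath:  "metadata.name",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // prints the manifest as JSON; kubectl create -f accepts JSON as well as YAML
}

Creating such a pod and reading the projected file (and its mode) back from inside the container is, in spirit, the "success or failure" condition the framework waits up to 5m0s for in the run above.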
Feb 15 11:40:27.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:40:27.529: INFO: namespace: e2e-tests-projected-g7hn9, resource: bindings, ignored listing per whitelist Feb 15 11:40:27.556: INFO: namespace e2e-tests-projected-g7hn9 deletion completed in 6.269952083s • [SLOW TEST:20.357 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:40:27.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-6rsqg I0215 11:40:27.785005 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-6rsqg, replica count: 1 I0215 11:40:28.836465 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:29.837680 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:30.838664 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:31.839379 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:32.841111 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:33.842216 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:34.843251 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:35.844097 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:36.844634 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:37.845264 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:38.846399 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 11:40:39.847625 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 15 11:40:40.038: INFO: Created: latency-svc-62sxw Feb 15 11:40:40.149: INFO: Got endpoints: latency-svc-62sxw [200.72783ms] Feb 15 11:40:40.291: INFO: Created: latency-svc-2vgwv Feb 15 11:40:40.333: INFO: Got endpoints: latency-svc-2vgwv [180.188189ms] Feb 15 11:40:40.342: INFO: Created: latency-svc-wdjs7 Feb 15 11:40:40.364: INFO: Got endpoints: latency-svc-wdjs7 [212.198518ms] Feb 15 11:40:40.486: INFO: Created: latency-svc-cpbxh Feb 15 11:40:40.549: INFO: Created: latency-svc-t4ws5 Feb 15 11:40:40.551: INFO: Got endpoints: latency-svc-cpbxh [399.493153ms] Feb 15 11:40:40.803: INFO: Got endpoints: latency-svc-t4ws5 [649.490789ms] Feb 15 11:40:40.847: INFO: Created: latency-svc-9dvtg Feb 15 11:40:40.863: INFO: Got endpoints: latency-svc-9dvtg [711.601743ms] Feb 15 11:40:41.086: INFO: Created: latency-svc-zqfxh Feb 15 11:40:41.113: INFO: Got endpoints: latency-svc-zqfxh [960.197592ms] Feb 15 11:40:41.277: INFO: Created: latency-svc-z6kdr Feb 15 11:40:41.300: INFO: Got endpoints: latency-svc-z6kdr [1.147027779s] Feb 15 11:40:41.348: INFO: Created: latency-svc-66bcb Feb 15 11:40:41.485: INFO: Got endpoints: latency-svc-66bcb [1.333048887s] Feb 15 11:40:41.589: INFO: Created: latency-svc-t2wmz Feb 15 11:40:41.810: INFO: Got endpoints: latency-svc-t2wmz [1.658155597s] Feb 15 11:40:41.874: INFO: Created: latency-svc-vttj2 Feb 15 11:40:42.073: INFO: Got endpoints: latency-svc-vttj2 [1.919795624s] Feb 15 11:40:42.117: INFO: Created: latency-svc-7lg2z Feb 15 11:40:42.139: INFO: Got endpoints: latency-svc-7lg2z [1.987961095s] Feb 15 11:40:42.274: INFO: Created: latency-svc-mwcfp Feb 15 11:40:42.290: INFO: Got endpoints: latency-svc-mwcfp [2.137765947s] Feb 15 11:40:42.352: INFO: Created: latency-svc-pgdss Feb 15 11:40:42.484: INFO: Got endpoints: latency-svc-pgdss [2.330568118s] Feb 15 11:40:42.506: INFO: Created: latency-svc-plpnp Feb 15 11:40:42.528: INFO: Got endpoints: latency-svc-plpnp [2.374813449s] Feb 15 11:40:42.707: INFO: Created: latency-svc-59xl7 Feb 15 11:40:42.718: INFO: Got endpoints: latency-svc-59xl7 [2.56456851s] Feb 15 11:40:42.798: INFO: Created: latency-svc-96q64 Feb 15 11:40:42.908: INFO: Got endpoints: latency-svc-96q64 [2.57443178s] Feb 15 11:40:42.945: INFO: Created: latency-svc-j4vbf Feb 15 11:40:42.982: INFO: Got endpoints: latency-svc-j4vbf [2.617989712s] Feb 15 11:40:43.196: INFO: Created: latency-svc-bwl25 Feb 15 11:40:43.214: INFO: Got endpoints: latency-svc-bwl25 [2.662691759s] Feb 15 11:40:43.264: INFO: Created: latency-svc-pvpx8 Feb 15 11:40:43.284: INFO: Got endpoints: latency-svc-pvpx8 [2.481093853s] Feb 15 11:40:43.433: INFO: Created: latency-svc-lz8vw Feb 15 11:40:43.473: INFO: Got endpoints: latency-svc-lz8vw [2.608847203s] Feb 15 11:40:43.612: INFO: Created: latency-svc-w686m Feb 15 11:40:43.649: INFO: Got endpoints: latency-svc-w686m [2.535776483s] Feb 15 11:40:43.885: INFO: Created: latency-svc-gfl6l Feb 15 11:40:43.886: INFO: Got endpoints: latency-svc-gfl6l [2.585507454s] Feb 15 11:40:44.033: INFO: Created: latency-svc-85s86 Feb 15 11:40:44.065: INFO: Got endpoints: latency-svc-85s86 [2.579071528s] Feb 15 11:40:44.259: INFO: Created: latency-svc-hr5gh Feb 15 11:40:44.297: INFO: Got endpoints: latency-svc-hr5gh [2.486779496s] Feb 15 11:40:44.414: INFO: Created: latency-svc-pvbt6 Feb 15 11:40:44.442: INFO: Got 
endpoints: latency-svc-pvbt6 [2.368394715s] Feb 15 11:40:44.494: INFO: Created: latency-svc-km9b5 Feb 15 11:40:44.655: INFO: Got endpoints: latency-svc-km9b5 [2.515517155s] Feb 15 11:40:44.727: INFO: Created: latency-svc-fgzm5 Feb 15 11:40:44.892: INFO: Got endpoints: latency-svc-fgzm5 [2.602006068s] Feb 15 11:40:44.926: INFO: Created: latency-svc-dfg56 Feb 15 11:40:45.115: INFO: Got endpoints: latency-svc-dfg56 [2.630219222s] Feb 15 11:40:45.157: INFO: Created: latency-svc-67cf8 Feb 15 11:40:45.161: INFO: Got endpoints: latency-svc-67cf8 [2.633383304s] Feb 15 11:40:45.293: INFO: Created: latency-svc-vz7k4 Feb 15 11:40:45.318: INFO: Got endpoints: latency-svc-vz7k4 [2.600010589s] Feb 15 11:40:45.370: INFO: Created: latency-svc-qlvcq Feb 15 11:40:45.383: INFO: Got endpoints: latency-svc-qlvcq [2.47475122s] Feb 15 11:40:45.462: INFO: Created: latency-svc-t722n Feb 15 11:40:45.481: INFO: Got endpoints: latency-svc-t722n [2.498606198s] Feb 15 11:40:45.768: INFO: Created: latency-svc-kf97n Feb 15 11:40:45.770: INFO: Got endpoints: latency-svc-kf97n [2.556074336s] Feb 15 11:40:45.825: INFO: Created: latency-svc-pwljk Feb 15 11:40:45.958: INFO: Got endpoints: latency-svc-pwljk [2.673526853s] Feb 15 11:40:45.988: INFO: Created: latency-svc-hbnc4 Feb 15 11:40:46.036: INFO: Got endpoints: latency-svc-hbnc4 [2.562634625s] Feb 15 11:40:46.042: INFO: Created: latency-svc-jvczl Feb 15 11:40:46.154: INFO: Got endpoints: latency-svc-jvczl [2.504124959s] Feb 15 11:40:46.169: INFO: Created: latency-svc-b9zwl Feb 15 11:40:46.179: INFO: Got endpoints: latency-svc-b9zwl [2.293160391s] Feb 15 11:40:46.237: INFO: Created: latency-svc-bxvh4 Feb 15 11:40:46.247: INFO: Got endpoints: latency-svc-bxvh4 [2.182076479s] Feb 15 11:40:46.383: INFO: Created: latency-svc-8vrrw Feb 15 11:40:46.399: INFO: Got endpoints: latency-svc-8vrrw [2.101599058s] Feb 15 11:40:46.435: INFO: Created: latency-svc-cfdkw Feb 15 11:40:46.455: INFO: Got endpoints: latency-svc-cfdkw [2.013251831s] Feb 15 11:40:46.583: INFO: Created: latency-svc-xm56d Feb 15 11:40:46.614: INFO: Got endpoints: latency-svc-xm56d [1.958117866s] Feb 15 11:40:46.653: INFO: Created: latency-svc-g2x76 Feb 15 11:40:46.669: INFO: Got endpoints: latency-svc-g2x76 [1.776045175s] Feb 15 11:40:46.922: INFO: Created: latency-svc-kfrhq Feb 15 11:40:46.950: INFO: Got endpoints: latency-svc-kfrhq [1.835168636s] Feb 15 11:40:47.088: INFO: Created: latency-svc-gdfjb Feb 15 11:40:47.116: INFO: Got endpoints: latency-svc-gdfjb [1.954225639s] Feb 15 11:40:47.160: INFO: Created: latency-svc-qnt25 Feb 15 11:40:47.295: INFO: Got endpoints: latency-svc-qnt25 [1.976610226s] Feb 15 11:40:47.331: INFO: Created: latency-svc-w64cc Feb 15 11:40:47.333: INFO: Got endpoints: latency-svc-w64cc [1.949393509s] Feb 15 11:40:47.404: INFO: Created: latency-svc-8qsk8 Feb 15 11:40:47.724: INFO: Got endpoints: latency-svc-8qsk8 [2.2425661s] Feb 15 11:40:47.808: INFO: Created: latency-svc-bhj4z Feb 15 11:40:47.991: INFO: Got endpoints: latency-svc-bhj4z [2.220180279s] Feb 15 11:40:48.073: INFO: Created: latency-svc-ckblk Feb 15 11:40:48.073: INFO: Got endpoints: latency-svc-ckblk [2.115079097s] Feb 15 11:40:48.189: INFO: Created: latency-svc-kbqv4 Feb 15 11:40:48.200: INFO: Got endpoints: latency-svc-kbqv4 [208.923321ms] Feb 15 11:40:48.273: INFO: Created: latency-svc-56tp6 Feb 15 11:40:48.427: INFO: Got endpoints: latency-svc-56tp6 [2.391218063s] Feb 15 11:40:48.458: INFO: Created: latency-svc-9b7zp Feb 15 11:40:48.487: INFO: Got endpoints: latency-svc-9b7zp [2.332415332s] Feb 15 11:40:48.638: INFO: 
Created: latency-svc-2lstb Feb 15 11:40:48.673: INFO: Got endpoints: latency-svc-2lstb [2.49370047s] Feb 15 11:40:48.876: INFO: Created: latency-svc-sl584 Feb 15 11:40:48.892: INFO: Got endpoints: latency-svc-sl584 [2.644809688s] Feb 15 11:40:49.038: INFO: Created: latency-svc-vkhpc Feb 15 11:40:49.057: INFO: Got endpoints: latency-svc-vkhpc [2.656961119s] Feb 15 11:40:49.299: INFO: Created: latency-svc-hgm62 Feb 15 11:40:49.301: INFO: Got endpoints: latency-svc-hgm62 [2.845728399s] Feb 15 11:40:49.387: INFO: Created: latency-svc-nl47w Feb 15 11:40:49.460: INFO: Got endpoints: latency-svc-nl47w [2.845986641s] Feb 15 11:40:49.494: INFO: Created: latency-svc-bd9j9 Feb 15 11:40:49.534: INFO: Got endpoints: latency-svc-bd9j9 [2.865209731s] Feb 15 11:40:49.724: INFO: Created: latency-svc-trhdw Feb 15 11:40:50.047: INFO: Got endpoints: latency-svc-trhdw [3.096356036s] Feb 15 11:40:50.092: INFO: Created: latency-svc-hxh62 Feb 15 11:40:50.131: INFO: Got endpoints: latency-svc-hxh62 [3.015064535s] Feb 15 11:40:50.332: INFO: Created: latency-svc-pv5qd Feb 15 11:40:50.554: INFO: Got endpoints: latency-svc-pv5qd [3.258725988s] Feb 15 11:40:50.631: INFO: Created: latency-svc-gk9br Feb 15 11:40:50.805: INFO: Got endpoints: latency-svc-gk9br [3.472001619s] Feb 15 11:40:50.837: INFO: Created: latency-svc-x7vtv Feb 15 11:40:50.871: INFO: Got endpoints: latency-svc-x7vtv [3.146430452s] Feb 15 11:40:51.058: INFO: Created: latency-svc-bnwm4 Feb 15 11:40:51.078: INFO: Got endpoints: latency-svc-bnwm4 [3.004642647s] Feb 15 11:40:51.170: INFO: Created: latency-svc-55fls Feb 15 11:40:51.285: INFO: Got endpoints: latency-svc-55fls [3.085239873s] Feb 15 11:40:51.329: INFO: Created: latency-svc-v9968 Feb 15 11:40:51.345: INFO: Got endpoints: latency-svc-v9968 [2.916945154s] Feb 15 11:40:51.507: INFO: Created: latency-svc-ppxgw Feb 15 11:40:51.522: INFO: Got endpoints: latency-svc-ppxgw [3.035088569s] Feb 15 11:40:51.596: INFO: Created: latency-svc-hgr4h Feb 15 11:40:51.727: INFO: Got endpoints: latency-svc-hgr4h [3.053725779s] Feb 15 11:40:51.798: INFO: Created: latency-svc-t5lmq Feb 15 11:40:51.801: INFO: Got endpoints: latency-svc-t5lmq [2.90847928s] Feb 15 11:40:51.972: INFO: Created: latency-svc-rrnbq Feb 15 11:40:51.987: INFO: Got endpoints: latency-svc-rrnbq [2.929942777s] Feb 15 11:40:52.194: INFO: Created: latency-svc-55fp2 Feb 15 11:40:52.226: INFO: Got endpoints: latency-svc-55fp2 [2.924972623s] Feb 15 11:40:52.358: INFO: Created: latency-svc-7zbd6 Feb 15 11:40:52.409: INFO: Got endpoints: latency-svc-7zbd6 [2.948146118s] Feb 15 11:40:54.152: INFO: Created: latency-svc-vfzhf Feb 15 11:40:54.174: INFO: Got endpoints: latency-svc-vfzhf [4.639742659s] Feb 15 11:40:54.560: INFO: Created: latency-svc-gdhnq Feb 15 11:40:55.082: INFO: Got endpoints: latency-svc-gdhnq [5.034034987s] Feb 15 11:40:55.150: INFO: Created: latency-svc-4xhff Feb 15 11:40:55.182: INFO: Got endpoints: latency-svc-4xhff [5.049385459s] Feb 15 11:40:55.390: INFO: Created: latency-svc-7mvcc Feb 15 11:40:55.410: INFO: Got endpoints: latency-svc-7mvcc [4.854892583s] Feb 15 11:40:55.590: INFO: Created: latency-svc-5t5mr Feb 15 11:40:55.604: INFO: Got endpoints: latency-svc-5t5mr [4.79859251s] Feb 15 11:40:55.792: INFO: Created: latency-svc-n8cnq Feb 15 11:40:55.807: INFO: Got endpoints: latency-svc-n8cnq [4.934785761s] Feb 15 11:40:56.086: INFO: Created: latency-svc-q6jhh Feb 15 11:40:56.107: INFO: Got endpoints: latency-svc-q6jhh [5.028997038s] Feb 15 11:40:56.315: INFO: Created: latency-svc-9n66l Feb 15 11:40:56.338: INFO: Got endpoints: 
latency-svc-9n66l [5.052140528s] Feb 15 11:40:56.592: INFO: Created: latency-svc-8zmdr Feb 15 11:40:56.621: INFO: Got endpoints: latency-svc-8zmdr [5.276329049s] Feb 15 11:40:56.780: INFO: Created: latency-svc-p2d7p Feb 15 11:40:56.834: INFO: Got endpoints: latency-svc-p2d7p [5.311798446s] Feb 15 11:40:56.986: INFO: Created: latency-svc-jtk7c Feb 15 11:40:57.010: INFO: Got endpoints: latency-svc-jtk7c [5.282030072s] Feb 15 11:40:57.186: INFO: Created: latency-svc-gsgd6 Feb 15 11:40:57.297: INFO: Got endpoints: latency-svc-gsgd6 [5.495742578s] Feb 15 11:40:57.482: INFO: Created: latency-svc-9r2zs Feb 15 11:40:57.513: INFO: Got endpoints: latency-svc-9r2zs [5.525133868s] Feb 15 11:40:57.662: INFO: Created: latency-svc-wcwh6 Feb 15 11:40:57.680: INFO: Got endpoints: latency-svc-wcwh6 [5.452711191s] Feb 15 11:40:57.829: INFO: Created: latency-svc-j8thp Feb 15 11:40:57.846: INFO: Got endpoints: latency-svc-j8thp [5.436386986s] Feb 15 11:40:57.973: INFO: Created: latency-svc-k8688 Feb 15 11:40:57.986: INFO: Got endpoints: latency-svc-k8688 [3.811225191s] Feb 15 11:40:58.059: INFO: Created: latency-svc-lc9dq Feb 15 11:40:58.175: INFO: Got endpoints: latency-svc-lc9dq [3.09207309s] Feb 15 11:40:58.458: INFO: Created: latency-svc-s5zcz Feb 15 11:40:58.495: INFO: Got endpoints: latency-svc-s5zcz [3.312787414s] Feb 15 11:40:59.472: INFO: Created: latency-svc-6hnsz Feb 15 11:40:59.683: INFO: Got endpoints: latency-svc-6hnsz [4.273503665s] Feb 15 11:40:59.982: INFO: Created: latency-svc-qhwh5 Feb 15 11:41:00.113: INFO: Got endpoints: latency-svc-qhwh5 [4.508659742s] Feb 15 11:41:00.134: INFO: Created: latency-svc-9svqm Feb 15 11:41:00.244: INFO: Got endpoints: latency-svc-9svqm [4.43697833s] Feb 15 11:41:00.297: INFO: Created: latency-svc-2nnjq Feb 15 11:41:00.304: INFO: Got endpoints: latency-svc-2nnjq [4.195828415s] Feb 15 11:41:00.448: INFO: Created: latency-svc-8qt74 Feb 15 11:41:00.483: INFO: Got endpoints: latency-svc-8qt74 [4.144061352s] Feb 15 11:41:00.571: INFO: Created: latency-svc-p77t9 Feb 15 11:41:00.762: INFO: Created: latency-svc-xnvpq Feb 15 11:41:00.839: INFO: Got endpoints: latency-svc-p77t9 [4.21704494s] Feb 15 11:41:00.846: INFO: Got endpoints: latency-svc-xnvpq [4.011846333s] Feb 15 11:41:00.905: INFO: Created: latency-svc-rk7jk Feb 15 11:41:01.075: INFO: Got endpoints: latency-svc-rk7jk [4.064867378s] Feb 15 11:41:01.112: INFO: Created: latency-svc-tz74m Feb 15 11:41:01.115: INFO: Got endpoints: latency-svc-tz74m [3.816938348s] Feb 15 11:41:01.170: INFO: Created: latency-svc-q4dvk Feb 15 11:41:01.261: INFO: Got endpoints: latency-svc-q4dvk [3.747724861s] Feb 15 11:41:01.328: INFO: Created: latency-svc-7cs57 Feb 15 11:41:01.338: INFO: Got endpoints: latency-svc-7cs57 [3.658592237s] Feb 15 11:41:01.492: INFO: Created: latency-svc-fpbhv Feb 15 11:41:01.496: INFO: Got endpoints: latency-svc-fpbhv [3.649868303s] Feb 15 11:41:01.564: INFO: Created: latency-svc-ghsld Feb 15 11:41:01.670: INFO: Got endpoints: latency-svc-ghsld [3.684138996s] Feb 15 11:41:01.691: INFO: Created: latency-svc-7g2v5 Feb 15 11:41:01.705: INFO: Got endpoints: latency-svc-7g2v5 [3.52955288s] Feb 15 11:41:01.908: INFO: Created: latency-svc-lpvgj Feb 15 11:41:01.924: INFO: Got endpoints: latency-svc-lpvgj [3.428226352s] Feb 15 11:41:02.121: INFO: Created: latency-svc-l9dhc Feb 15 11:41:02.195: INFO: Created: latency-svc-ng2th Feb 15 11:41:02.195: INFO: Got endpoints: latency-svc-l9dhc [2.511570682s] Feb 15 11:41:02.359: INFO: Got endpoints: latency-svc-ng2th [2.245075211s] Feb 15 11:41:02.427: INFO: Created: 
latency-svc-lrcjt Feb 15 11:41:02.427: INFO: Got endpoints: latency-svc-lrcjt [2.182689649s] Feb 15 11:41:02.599: INFO: Created: latency-svc-ftnxc Feb 15 11:41:02.619: INFO: Got endpoints: latency-svc-ftnxc [2.31490219s] Feb 15 11:41:02.765: INFO: Created: latency-svc-7r9gm Feb 15 11:41:02.812: INFO: Got endpoints: latency-svc-7r9gm [2.328738776s] Feb 15 11:41:02.997: INFO: Created: latency-svc-j4vsd Feb 15 11:41:03.007: INFO: Got endpoints: latency-svc-j4vsd [2.167965502s] Feb 15 11:41:03.171: INFO: Created: latency-svc-rxf7m Feb 15 11:41:03.199: INFO: Got endpoints: latency-svc-rxf7m [2.353074593s] Feb 15 11:41:03.324: INFO: Created: latency-svc-6gtzm Feb 15 11:41:03.346: INFO: Got endpoints: latency-svc-6gtzm [2.270410844s] Feb 15 11:41:03.429: INFO: Created: latency-svc-2q4q4 Feb 15 11:41:03.515: INFO: Got endpoints: latency-svc-2q4q4 [2.399804423s] Feb 15 11:41:03.540: INFO: Created: latency-svc-bs9hx Feb 15 11:41:03.556: INFO: Got endpoints: latency-svc-bs9hx [2.295403836s] Feb 15 11:41:03.783: INFO: Created: latency-svc-z7dt5 Feb 15 11:41:03.820: INFO: Got endpoints: latency-svc-z7dt5 [2.481723769s] Feb 15 11:41:03.944: INFO: Created: latency-svc-j89qz Feb 15 11:41:03.968: INFO: Got endpoints: latency-svc-j89qz [2.472718182s] Feb 15 11:41:04.018: INFO: Created: latency-svc-ht779 Feb 15 11:41:04.152: INFO: Got endpoints: latency-svc-ht779 [2.480673994s] Feb 15 11:41:04.179: INFO: Created: latency-svc-255st Feb 15 11:41:04.312: INFO: Got endpoints: latency-svc-255st [2.607089005s] Feb 15 11:41:04.318: INFO: Created: latency-svc-tlf47 Feb 15 11:41:04.342: INFO: Got endpoints: latency-svc-tlf47 [2.41683826s] Feb 15 11:41:04.594: INFO: Created: latency-svc-9g6sv Feb 15 11:41:04.623: INFO: Got endpoints: latency-svc-9g6sv [2.427645412s] Feb 15 11:41:04.797: INFO: Created: latency-svc-mqg7x Feb 15 11:41:04.806: INFO: Got endpoints: latency-svc-mqg7x [2.447258896s] Feb 15 11:41:05.035: INFO: Created: latency-svc-56g6m Feb 15 11:41:05.074: INFO: Got endpoints: latency-svc-56g6m [2.646418394s] Feb 15 11:41:05.257: INFO: Created: latency-svc-nznsb Feb 15 11:41:05.297: INFO: Got endpoints: latency-svc-nznsb [2.678326019s] Feb 15 11:41:05.489: INFO: Created: latency-svc-6ntnq Feb 15 11:41:05.516: INFO: Got endpoints: latency-svc-6ntnq [2.702930346s] Feb 15 11:41:05.663: INFO: Created: latency-svc-sfl5n Feb 15 11:41:05.679: INFO: Got endpoints: latency-svc-sfl5n [2.671756396s] Feb 15 11:41:05.904: INFO: Created: latency-svc-87fwf Feb 15 11:41:05.913: INFO: Got endpoints: latency-svc-87fwf [2.713063355s] Feb 15 11:41:06.189: INFO: Created: latency-svc-x8xch Feb 15 11:41:06.241: INFO: Got endpoints: latency-svc-x8xch [2.894945484s] Feb 15 11:41:06.249: INFO: Created: latency-svc-ngsjh Feb 15 11:41:06.414: INFO: Got endpoints: latency-svc-ngsjh [2.898841996s] Feb 15 11:41:06.478: INFO: Created: latency-svc-ssbnk Feb 15 11:41:06.504: INFO: Got endpoints: latency-svc-ssbnk [2.947036666s] Feb 15 11:41:06.718: INFO: Created: latency-svc-czbjl Feb 15 11:41:06.735: INFO: Got endpoints: latency-svc-czbjl [2.914213694s] Feb 15 11:41:07.700: INFO: Created: latency-svc-hv4td Feb 15 11:41:07.754: INFO: Got endpoints: latency-svc-hv4td [3.785350133s] Feb 15 11:41:07.939: INFO: Created: latency-svc-hb4bp Feb 15 11:41:07.971: INFO: Got endpoints: latency-svc-hb4bp [3.818438507s] Feb 15 11:41:08.021: INFO: Created: latency-svc-6qzqt Feb 15 11:41:08.117: INFO: Got endpoints: latency-svc-6qzqt [3.804378916s] Feb 15 11:41:08.186: INFO: Created: latency-svc-d27cd Feb 15 11:41:08.375: INFO: Got endpoints: 
latency-svc-d27cd [4.032621302s] Feb 15 11:41:08.397: INFO: Created: latency-svc-jthc8 Feb 15 11:41:08.555: INFO: Got endpoints: latency-svc-jthc8 [3.931395639s] Feb 15 11:41:08.605: INFO: Created: latency-svc-nzwrf Feb 15 11:41:08.611: INFO: Got endpoints: latency-svc-nzwrf [3.804877625s] Feb 15 11:41:08.829: INFO: Created: latency-svc-7n2mb Feb 15 11:41:08.854: INFO: Got endpoints: latency-svc-7n2mb [3.7796353s] Feb 15 11:41:08.921: INFO: Created: latency-svc-bzjgn Feb 15 11:41:09.010: INFO: Got endpoints: latency-svc-bzjgn [3.712289671s] Feb 15 11:41:09.089: INFO: Created: latency-svc-ff628 Feb 15 11:41:09.199: INFO: Got endpoints: latency-svc-ff628 [3.683046793s] Feb 15 11:41:09.238: INFO: Created: latency-svc-687m5 Feb 15 11:41:09.266: INFO: Got endpoints: latency-svc-687m5 [3.587095509s] Feb 15 11:41:09.299: INFO: Created: latency-svc-n2d55 Feb 15 11:41:09.417: INFO: Got endpoints: latency-svc-n2d55 [3.504024781s] Feb 15 11:41:09.498: INFO: Created: latency-svc-rtsws Feb 15 11:41:09.601: INFO: Got endpoints: latency-svc-rtsws [3.359761229s] Feb 15 11:41:09.651: INFO: Created: latency-svc-xbs5h Feb 15 11:41:09.651: INFO: Got endpoints: latency-svc-xbs5h [3.236402943s] Feb 15 11:41:09.847: INFO: Created: latency-svc-vfx9z Feb 15 11:41:09.863: INFO: Got endpoints: latency-svc-vfx9z [3.358720114s] Feb 15 11:41:09.929: INFO: Created: latency-svc-vxssv Feb 15 11:41:10.078: INFO: Got endpoints: latency-svc-vxssv [3.342885745s] Feb 15 11:41:10.129: INFO: Created: latency-svc-5zsh2 Feb 15 11:41:10.141: INFO: Got endpoints: latency-svc-5zsh2 [2.38644107s] Feb 15 11:41:10.277: INFO: Created: latency-svc-w5ksh Feb 15 11:41:10.290: INFO: Got endpoints: latency-svc-w5ksh [2.318548359s] Feb 15 11:41:10.341: INFO: Created: latency-svc-52wzr Feb 15 11:41:10.352: INFO: Got endpoints: latency-svc-52wzr [2.234475257s] Feb 15 11:41:10.687: INFO: Created: latency-svc-gqhp5 Feb 15 11:41:10.691: INFO: Got endpoints: latency-svc-gqhp5 [2.315948497s] Feb 15 11:41:10.692: INFO: Created: latency-svc-ztwvl Feb 15 11:41:10.692: INFO: Got endpoints: latency-svc-ztwvl [2.135844217s] Feb 15 11:41:10.869: INFO: Created: latency-svc-dfx8v Feb 15 11:41:10.920: INFO: Created: latency-svc-nkrx4 Feb 15 11:41:10.933: INFO: Got endpoints: latency-svc-dfx8v [2.321644207s] Feb 15 11:41:10.938: INFO: Got endpoints: latency-svc-nkrx4 [2.083154738s] Feb 15 11:41:11.054: INFO: Created: latency-svc-c7hnw Feb 15 11:41:11.065: INFO: Got endpoints: latency-svc-c7hnw [2.054485783s] Feb 15 11:41:11.234: INFO: Created: latency-svc-d4fh5 Feb 15 11:41:11.244: INFO: Got endpoints: latency-svc-d4fh5 [2.045175143s] Feb 15 11:41:11.320: INFO: Created: latency-svc-v58l4 Feb 15 11:41:11.428: INFO: Got endpoints: latency-svc-v58l4 [2.161822807s] Feb 15 11:41:11.461: INFO: Created: latency-svc-c2hjh Feb 15 11:41:11.479: INFO: Got endpoints: latency-svc-c2hjh [2.061933743s] Feb 15 11:41:11.626: INFO: Created: latency-svc-c7pm6 Feb 15 11:41:11.713: INFO: Got endpoints: latency-svc-c7pm6 [2.111087445s] Feb 15 11:41:11.716: INFO: Created: latency-svc-55bl2 Feb 15 11:41:11.830: INFO: Got endpoints: latency-svc-55bl2 [2.179026246s] Feb 15 11:41:11.862: INFO: Created: latency-svc-l2q4z Feb 15 11:41:11.888: INFO: Got endpoints: latency-svc-l2q4z [2.02424305s] Feb 15 11:41:12.066: INFO: Created: latency-svc-skm8m Feb 15 11:41:12.080: INFO: Got endpoints: latency-svc-skm8m [2.001110891s] Feb 15 11:41:12.235: INFO: Created: latency-svc-5mddk Feb 15 11:41:12.264: INFO: Got endpoints: latency-svc-5mddk [2.122718538s] Feb 15 11:41:12.434: INFO: Created: 
latency-svc-7qzdz Feb 15 11:41:12.472: INFO: Got endpoints: latency-svc-7qzdz [2.181334188s] Feb 15 11:41:12.644: INFO: Created: latency-svc-9kd2m Feb 15 11:41:12.715: INFO: Got endpoints: latency-svc-9kd2m [2.362588504s] Feb 15 11:41:12.729: INFO: Created: latency-svc-mkb6k Feb 15 11:41:12.863: INFO: Got endpoints: latency-svc-mkb6k [2.17233393s] Feb 15 11:41:12.876: INFO: Created: latency-svc-5zqss Feb 15 11:41:12.906: INFO: Got endpoints: latency-svc-5zqss [2.214003887s] Feb 15 11:41:13.061: INFO: Created: latency-svc-hk5jw Feb 15 11:41:13.089: INFO: Got endpoints: latency-svc-hk5jw [2.155186755s] Feb 15 11:41:13.254: INFO: Created: latency-svc-lqhvq Feb 15 11:41:13.275: INFO: Got endpoints: latency-svc-lqhvq [2.337467106s] Feb 15 11:41:13.339: INFO: Created: latency-svc-rkm2q Feb 15 11:41:13.440: INFO: Got endpoints: latency-svc-rkm2q [2.374354054s] Feb 15 11:41:13.473: INFO: Created: latency-svc-kxwxf Feb 15 11:41:13.497: INFO: Got endpoints: latency-svc-kxwxf [2.252294582s] Feb 15 11:41:13.647: INFO: Created: latency-svc-67xcc Feb 15 11:41:13.679: INFO: Got endpoints: latency-svc-67xcc [2.250639592s] Feb 15 11:41:13.739: INFO: Created: latency-svc-mg98l Feb 15 11:41:13.923: INFO: Got endpoints: latency-svc-mg98l [2.443968721s] Feb 15 11:41:13.925: INFO: Created: latency-svc-4cj9j Feb 15 11:41:13.941: INFO: Got endpoints: latency-svc-4cj9j [2.228240194s] Feb 15 11:41:14.094: INFO: Created: latency-svc-trk7f Feb 15 11:41:14.139: INFO: Got endpoints: latency-svc-trk7f [2.308205662s] Feb 15 11:41:14.299: INFO: Created: latency-svc-dxn4z Feb 15 11:41:14.309: INFO: Got endpoints: latency-svc-dxn4z [2.42066482s] Feb 15 11:41:14.454: INFO: Created: latency-svc-nkcsr Feb 15 11:41:14.481: INFO: Got endpoints: latency-svc-nkcsr [2.400962117s] Feb 15 11:41:14.532: INFO: Created: latency-svc-snzq6 Feb 15 11:41:14.627: INFO: Got endpoints: latency-svc-snzq6 [2.362037841s] Feb 15 11:41:14.678: INFO: Created: latency-svc-g5dvx Feb 15 11:41:14.694: INFO: Got endpoints: latency-svc-g5dvx [2.221450971s] Feb 15 11:41:14.811: INFO: Created: latency-svc-7tz66 Feb 15 11:41:14.825: INFO: Got endpoints: latency-svc-7tz66 [2.110170622s] Feb 15 11:41:14.879: INFO: Created: latency-svc-k42cs Feb 15 11:41:14.885: INFO: Got endpoints: latency-svc-k42cs [2.02142685s] Feb 15 11:41:15.003: INFO: Created: latency-svc-49kbt Feb 15 11:41:15.014: INFO: Got endpoints: latency-svc-49kbt [2.107410274s] Feb 15 11:41:15.106: INFO: Created: latency-svc-f7k6w Feb 15 11:41:15.194: INFO: Got endpoints: latency-svc-f7k6w [2.104718294s] Feb 15 11:41:15.227: INFO: Created: latency-svc-hdb6s Feb 15 11:41:15.237: INFO: Got endpoints: latency-svc-hdb6s [1.961346649s] Feb 15 11:41:15.295: INFO: Created: latency-svc-nvlrv Feb 15 11:41:15.429: INFO: Got endpoints: latency-svc-nvlrv [1.988919514s] Feb 15 11:41:15.433: INFO: Created: latency-svc-bb5bm Feb 15 11:41:15.442: INFO: Got endpoints: latency-svc-bb5bm [1.945083002s] Feb 15 11:41:15.564: INFO: Created: latency-svc-c6mhr Feb 15 11:41:15.577: INFO: Got endpoints: latency-svc-c6mhr [1.897053369s] Feb 15 11:41:15.612: INFO: Created: latency-svc-9gmmc Feb 15 11:41:15.630: INFO: Got endpoints: latency-svc-9gmmc [1.706275146s] Feb 15 11:41:15.829: INFO: Created: latency-svc-sznzn Feb 15 11:41:15.852: INFO: Got endpoints: latency-svc-sznzn [1.910369101s] Feb 15 11:41:16.042: INFO: Created: latency-svc-6s8tb Feb 15 11:41:16.053: INFO: Got endpoints: latency-svc-6s8tb [1.913120044s] Feb 15 11:41:16.113: INFO: Created: latency-svc-sj5xp Feb 15 11:41:16.211: INFO: Got endpoints: 
latency-svc-sj5xp [1.901014922s] Feb 15 11:41:16.261: INFO: Created: latency-svc-ww9s2 Feb 15 11:41:16.300: INFO: Got endpoints: latency-svc-ww9s2 [1.818958607s] Feb 15 11:41:16.312: INFO: Created: latency-svc-gxdsw Feb 15 11:41:16.404: INFO: Got endpoints: latency-svc-gxdsw [1.776688092s] Feb 15 11:41:16.455: INFO: Created: latency-svc-qrbfs Feb 15 11:41:16.482: INFO: Got endpoints: latency-svc-qrbfs [1.787904699s] Feb 15 11:41:16.668: INFO: Created: latency-svc-tfqbb Feb 15 11:41:16.696: INFO: Got endpoints: latency-svc-tfqbb [1.870962348s] Feb 15 11:41:16.888: INFO: Created: latency-svc-qgf65 Feb 15 11:41:17.081: INFO: Created: latency-svc-k2pj8 Feb 15 11:41:17.082: INFO: Got endpoints: latency-svc-qgf65 [2.196544492s] Feb 15 11:41:17.109: INFO: Got endpoints: latency-svc-k2pj8 [2.095154654s] Feb 15 11:41:17.178: INFO: Created: latency-svc-c44vt Feb 15 11:41:17.325: INFO: Got endpoints: latency-svc-c44vt [2.130952061s] Feb 15 11:41:17.356: INFO: Created: latency-svc-7h24q Feb 15 11:41:17.369: INFO: Got endpoints: latency-svc-7h24q [2.132181417s] Feb 15 11:41:17.498: INFO: Created: latency-svc-64rrf Feb 15 11:41:17.763: INFO: Got endpoints: latency-svc-64rrf [2.333123962s] Feb 15 11:41:17.766: INFO: Created: latency-svc-7pqsr Feb 15 11:41:17.814: INFO: Got endpoints: latency-svc-7pqsr [2.371392169s] Feb 15 11:41:17.815: INFO: Latencies: [180.188189ms 208.923321ms 212.198518ms 399.493153ms 649.490789ms 711.601743ms 960.197592ms 1.147027779s 1.333048887s 1.658155597s 1.706275146s 1.776045175s 1.776688092s 1.787904699s 1.818958607s 1.835168636s 1.870962348s 1.897053369s 1.901014922s 1.910369101s 1.913120044s 1.919795624s 1.945083002s 1.949393509s 1.954225639s 1.958117866s 1.961346649s 1.976610226s 1.987961095s 1.988919514s 2.001110891s 2.013251831s 2.02142685s 2.02424305s 2.045175143s 2.054485783s 2.061933743s 2.083154738s 2.095154654s 2.101599058s 2.104718294s 2.107410274s 2.110170622s 2.111087445s 2.115079097s 2.122718538s 2.130952061s 2.132181417s 2.135844217s 2.137765947s 2.155186755s 2.161822807s 2.167965502s 2.17233393s 2.179026246s 2.181334188s 2.182076479s 2.182689649s 2.196544492s 2.214003887s 2.220180279s 2.221450971s 2.228240194s 2.234475257s 2.2425661s 2.245075211s 2.250639592s 2.252294582s 2.270410844s 2.293160391s 2.295403836s 2.308205662s 2.31490219s 2.315948497s 2.318548359s 2.321644207s 2.328738776s 2.330568118s 2.332415332s 2.333123962s 2.337467106s 2.353074593s 2.362037841s 2.362588504s 2.368394715s 2.371392169s 2.374354054s 2.374813449s 2.38644107s 2.391218063s 2.399804423s 2.400962117s 2.41683826s 2.42066482s 2.427645412s 2.443968721s 2.447258896s 2.472718182s 2.47475122s 2.480673994s 2.481093853s 2.481723769s 2.486779496s 2.49370047s 2.498606198s 2.504124959s 2.511570682s 2.515517155s 2.535776483s 2.556074336s 2.562634625s 2.56456851s 2.57443178s 2.579071528s 2.585507454s 2.600010589s 2.602006068s 2.607089005s 2.608847203s 2.617989712s 2.630219222s 2.633383304s 2.644809688s 2.646418394s 2.656961119s 2.662691759s 2.671756396s 2.673526853s 2.678326019s 2.702930346s 2.713063355s 2.845728399s 2.845986641s 2.865209731s 2.894945484s 2.898841996s 2.90847928s 2.914213694s 2.916945154s 2.924972623s 2.929942777s 2.947036666s 2.948146118s 3.004642647s 3.015064535s 3.035088569s 3.053725779s 3.085239873s 3.09207309s 3.096356036s 3.146430452s 3.236402943s 3.258725988s 3.312787414s 3.342885745s 3.358720114s 3.359761229s 3.428226352s 3.472001619s 3.504024781s 3.52955288s 3.587095509s 3.649868303s 3.658592237s 3.683046793s 3.684138996s 3.712289671s 3.747724861s 3.7796353s 3.785350133s 
3.804378916s 3.804877625s 3.811225191s 3.816938348s 3.818438507s 3.931395639s 4.011846333s 4.032621302s 4.064867378s 4.144061352s 4.195828415s 4.21704494s 4.273503665s 4.43697833s 4.508659742s 4.639742659s 4.79859251s 4.854892583s 4.934785761s 5.028997038s 5.034034987s 5.049385459s 5.052140528s 5.276329049s 5.282030072s 5.311798446s 5.436386986s 5.452711191s 5.495742578s 5.525133868s] Feb 15 11:41:17.815: INFO: 50 %ile: 2.481093853s Feb 15 11:41:17.815: INFO: 90 %ile: 4.195828415s Feb 15 11:41:17.815: INFO: 99 %ile: 5.495742578s Feb 15 11:41:17.815: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:41:17.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-6rsqg" for this suite. Feb 15 11:42:14.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:42:14.345: INFO: namespace: e2e-tests-svc-latency-6rsqg, resource: bindings, ignored listing per whitelist Feb 15 11:42:14.380: INFO: namespace e2e-tests-svc-latency-6rsqg deletion completed in 56.293218002s • [SLOW TEST:106.823 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:42:14.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-tsc75 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Feb 15 11:42:14.703: INFO: Found 0 stateful pods, waiting for 3 Feb 15 11:42:24.813: INFO: Found 2 stateful pods, waiting for 3 Feb 15 11:42:34.716: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:42:34.716: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:42:34.716: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 15 11:42:44.717: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:42:44.717: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:42:44.717: INFO: Waiting for pod ss2-2 to enter 
Running - Ready=true, currently Running - Ready=true Feb 15 11:42:44.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsc75 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:42:45.309: INFO: stderr: "I0215 11:42:44.935608 1347 log.go:172] (0xc0006ee2c0) (0xc000714640) Create stream\nI0215 11:42:44.936008 1347 log.go:172] (0xc0006ee2c0) (0xc000714640) Stream added, broadcasting: 1\nI0215 11:42:44.944050 1347 log.go:172] (0xc0006ee2c0) Reply frame received for 1\nI0215 11:42:44.944125 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6d20) Create stream\nI0215 11:42:44.944143 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6d20) Stream added, broadcasting: 3\nI0215 11:42:44.945701 1347 log.go:172] (0xc0006ee2c0) Reply frame received for 3\nI0215 11:42:44.945753 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6e60) Create stream\nI0215 11:42:44.945777 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6e60) Stream added, broadcasting: 5\nI0215 11:42:44.947749 1347 log.go:172] (0xc0006ee2c0) Reply frame received for 5\nI0215 11:42:45.182635 1347 log.go:172] (0xc0006ee2c0) Data frame received for 3\nI0215 11:42:45.182767 1347 log.go:172] (0xc0004e6d20) (3) Data frame handling\nI0215 11:42:45.182804 1347 log.go:172] (0xc0004e6d20) (3) Data frame sent\nI0215 11:42:45.297882 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6e60) Stream removed, broadcasting: 5\nI0215 11:42:45.298044 1347 log.go:172] (0xc0006ee2c0) Data frame received for 1\nI0215 11:42:45.298089 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6d20) Stream removed, broadcasting: 3\nI0215 11:42:45.298155 1347 log.go:172] (0xc000714640) (1) Data frame handling\nI0215 11:42:45.298324 1347 log.go:172] (0xc000714640) (1) Data frame sent\nI0215 11:42:45.298343 1347 log.go:172] (0xc0006ee2c0) (0xc000714640) Stream removed, broadcasting: 1\nI0215 11:42:45.298373 1347 log.go:172] (0xc0006ee2c0) Go away received\nI0215 11:42:45.299360 1347 log.go:172] (0xc0006ee2c0) (0xc000714640) Stream removed, broadcasting: 1\nI0215 11:42:45.299379 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6d20) Stream removed, broadcasting: 3\nI0215 11:42:45.299388 1347 log.go:172] (0xc0006ee2c0) (0xc0004e6e60) Stream removed, broadcasting: 5\n" Feb 15 11:42:45.309: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:42:45.309: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 15 11:42:55.404: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 15 11:43:05.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsc75 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:43:06.484: INFO: stderr: "I0215 11:43:05.857399 1368 log.go:172] (0xc0006be0b0) (0xc0006e4000) Create stream\nI0215 11:43:05.857973 1368 log.go:172] (0xc0006be0b0) (0xc0006e4000) Stream added, broadcasting: 1\nI0215 11:43:05.865774 1368 log.go:172] (0xc0006be0b0) Reply frame received for 1\nI0215 11:43:05.865844 1368 log.go:172] (0xc0006be0b0) (0xc00048ed20) Create stream\nI0215 11:43:05.865862 1368 log.go:172] (0xc0006be0b0) (0xc00048ed20) Stream added, broadcasting: 3\nI0215 11:43:05.867828 1368 log.go:172] (0xc0006be0b0) Reply frame received for 3\nI0215 
11:43:05.867910 1368 log.go:172] (0xc0006be0b0) (0xc00012a000) Create stream\nI0215 11:43:05.867925 1368 log.go:172] (0xc0006be0b0) (0xc00012a000) Stream added, broadcasting: 5\nI0215 11:43:05.869100 1368 log.go:172] (0xc0006be0b0) Reply frame received for 5\nI0215 11:43:06.032714 1368 log.go:172] (0xc0006be0b0) Data frame received for 3\nI0215 11:43:06.032835 1368 log.go:172] (0xc00048ed20) (3) Data frame handling\nI0215 11:43:06.032920 1368 log.go:172] (0xc00048ed20) (3) Data frame sent\nI0215 11:43:06.457119 1368 log.go:172] (0xc0006be0b0) (0xc00048ed20) Stream removed, broadcasting: 3\nI0215 11:43:06.457888 1368 log.go:172] (0xc0006be0b0) Data frame received for 1\nI0215 11:43:06.458121 1368 log.go:172] (0xc0006be0b0) (0xc00012a000) Stream removed, broadcasting: 5\nI0215 11:43:06.458190 1368 log.go:172] (0xc0006e4000) (1) Data frame handling\nI0215 11:43:06.458236 1368 log.go:172] (0xc0006e4000) (1) Data frame sent\nI0215 11:43:06.458258 1368 log.go:172] (0xc0006be0b0) (0xc0006e4000) Stream removed, broadcasting: 1\nI0215 11:43:06.458279 1368 log.go:172] (0xc0006be0b0) Go away received\nI0215 11:43:06.459715 1368 log.go:172] (0xc0006be0b0) (0xc0006e4000) Stream removed, broadcasting: 1\nI0215 11:43:06.459742 1368 log.go:172] (0xc0006be0b0) (0xc00048ed20) Stream removed, broadcasting: 3\nI0215 11:43:06.459760 1368 log.go:172] (0xc0006be0b0) (0xc00012a000) Stream removed, broadcasting: 5\n" Feb 15 11:43:06.484: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:43:06.484: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:43:06.576: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:43:06.577: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:06.577: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:06.577: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:16.651: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:43:16.651: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:16.651: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:16.651: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:26.645: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:43:26.645: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:26.645: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:36.631: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:43:36.631: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:36.631: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:46.626: INFO: Waiting for StatefulSet 
e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:43:46.627: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:43:56.633: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:43:56.634: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 11:44:06.619: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update STEP: Rolling back to a previous revision Feb 15 11:44:16.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsc75 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:44:17.431: INFO: stderr: "I0215 11:44:16.817459 1391 log.go:172] (0xc00069c370) (0xc0006ea640) Create stream\nI0215 11:44:16.817846 1391 log.go:172] (0xc00069c370) (0xc0006ea640) Stream added, broadcasting: 1\nI0215 11:44:16.827765 1391 log.go:172] (0xc00069c370) Reply frame received for 1\nI0215 11:44:16.827841 1391 log.go:172] (0xc00069c370) (0xc000392dc0) Create stream\nI0215 11:44:16.827860 1391 log.go:172] (0xc00069c370) (0xc000392dc0) Stream added, broadcasting: 3\nI0215 11:44:16.829175 1391 log.go:172] (0xc00069c370) Reply frame received for 3\nI0215 11:44:16.829243 1391 log.go:172] (0xc00069c370) (0xc00053e000) Create stream\nI0215 11:44:16.829285 1391 log.go:172] (0xc00069c370) (0xc00053e000) Stream added, broadcasting: 5\nI0215 11:44:16.830598 1391 log.go:172] (0xc00069c370) Reply frame received for 5\nI0215 11:44:17.157890 1391 log.go:172] (0xc00069c370) Data frame received for 3\nI0215 11:44:17.157999 1391 log.go:172] (0xc000392dc0) (3) Data frame handling\nI0215 11:44:17.158033 1391 log.go:172] (0xc000392dc0) (3) Data frame sent\nI0215 11:44:17.408450 1391 log.go:172] (0xc00069c370) Data frame received for 1\nI0215 11:44:17.408736 1391 log.go:172] (0xc00069c370) (0xc00053e000) Stream removed, broadcasting: 5\nI0215 11:44:17.408873 1391 log.go:172] (0xc0006ea640) (1) Data frame handling\nI0215 11:44:17.408920 1391 log.go:172] (0xc0006ea640) (1) Data frame sent\nI0215 11:44:17.409101 1391 log.go:172] (0xc00069c370) (0xc000392dc0) Stream removed, broadcasting: 3\nI0215 11:44:17.409317 1391 log.go:172] (0xc00069c370) (0xc0006ea640) Stream removed, broadcasting: 1\nI0215 11:44:17.410168 1391 log.go:172] (0xc00069c370) Go away received\nI0215 11:44:17.410747 1391 log.go:172] (0xc00069c370) (0xc0006ea640) Stream removed, broadcasting: 1\nI0215 11:44:17.410784 1391 log.go:172] (0xc00069c370) (0xc000392dc0) Stream removed, broadcasting: 3\nI0215 11:44:17.410789 1391 log.go:172] (0xc00069c370) (0xc00053e000) Stream removed, broadcasting: 5\n" Feb 15 11:44:17.431: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:44:17.431: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:44:27.540: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 15 11:44:37.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsc75 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:44:38.218: INFO: stderr: "I0215 11:44:37.904549 1413 log.go:172] (0xc0007040b0) (0xc0007225a0) Create stream\nI0215 11:44:37.904859 1413 log.go:172] (0xc0007040b0) (0xc0007225a0) Stream added, 
broadcasting: 1\nI0215 11:44:37.912113 1413 log.go:172] (0xc0007040b0) Reply frame received for 1\nI0215 11:44:37.912160 1413 log.go:172] (0xc0007040b0) (0xc000722640) Create stream\nI0215 11:44:37.912173 1413 log.go:172] (0xc0007040b0) (0xc000722640) Stream added, broadcasting: 3\nI0215 11:44:37.913783 1413 log.go:172] (0xc0007040b0) Reply frame received for 3\nI0215 11:44:37.913806 1413 log.go:172] (0xc0007040b0) (0xc0007dcbe0) Create stream\nI0215 11:44:37.913813 1413 log.go:172] (0xc0007040b0) (0xc0007dcbe0) Stream added, broadcasting: 5\nI0215 11:44:37.914679 1413 log.go:172] (0xc0007040b0) Reply frame received for 5\nI0215 11:44:38.062162 1413 log.go:172] (0xc0007040b0) Data frame received for 3\nI0215 11:44:38.062254 1413 log.go:172] (0xc000722640) (3) Data frame handling\nI0215 11:44:38.062274 1413 log.go:172] (0xc000722640) (3) Data frame sent\nI0215 11:44:38.202856 1413 log.go:172] (0xc0007040b0) Data frame received for 1\nI0215 11:44:38.203016 1413 log.go:172] (0xc0007225a0) (1) Data frame handling\nI0215 11:44:38.203074 1413 log.go:172] (0xc0007225a0) (1) Data frame sent\nI0215 11:44:38.203349 1413 log.go:172] (0xc0007040b0) (0xc0007dcbe0) Stream removed, broadcasting: 5\nI0215 11:44:38.203549 1413 log.go:172] (0xc0007040b0) (0xc000722640) Stream removed, broadcasting: 3\nI0215 11:44:38.203659 1413 log.go:172] (0xc0007040b0) (0xc0007225a0) Stream removed, broadcasting: 1\nI0215 11:44:38.203702 1413 log.go:172] (0xc0007040b0) Go away received\nI0215 11:44:38.204387 1413 log.go:172] (0xc0007040b0) (0xc0007225a0) Stream removed, broadcasting: 1\nI0215 11:44:38.204408 1413 log.go:172] (0xc0007040b0) (0xc000722640) Stream removed, broadcasting: 3\nI0215 11:44:38.204416 1413 log.go:172] (0xc0007040b0) (0xc0007dcbe0) Stream removed, broadcasting: 5\n" Feb 15 11:44:38.219: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:44:38.219: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:44:48.285: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:44:48.286: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 11:44:48.286: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 11:44:48.286: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 11:44:58.415: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:44:58.415: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 11:44:58.415: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 11:45:08.343: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:45:08.343: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 11:45:08.343: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 11:45:18.694: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update Feb 15 11:45:18.695: INFO: Waiting for Pod e2e-tests-statefulset-tsc75/ss2-0 to have revision ss2-7c9b54fd4c update revision 
ss2-6c5cd755cd Feb 15 11:45:28.575: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsc75/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 15 11:45:38.324: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tsc75 Feb 15 11:45:38.331: INFO: Scaling statefulset ss2 to 0 Feb 15 11:46:08.397: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 11:46:08.406: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:46:08.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-tsc75" for this suite. Feb 15 11:46:18.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:46:18.746: INFO: namespace: e2e-tests-statefulset-tsc75, resource: bindings, ignored listing per whitelist Feb 15 11:46:18.807: INFO: namespace e2e-tests-statefulset-tsc75 deletion completed in 10.212091326s • [SLOW TEST:244.427 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:46:18.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 15 11:46:18.974: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 15 11:46:18.982: INFO: Waiting for terminating namespaces to be deleted... 
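For reference, the ss2 rollback exercised above is driven entirely by the kubectl exec call captured in the log (shown again below with shell quoting added for manual use) plus a revert of the pod template; the jsonpath check that follows is only a sketch for watching the currentRevision/updateRevision fields converge and is not part of this run.

# exec step run by the harness against ss2-1 (verbatim command from the log, quoting added)
kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsc75 ss2-1 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# sketch: watch the revisions reported in the StatefulSet status while the roll-back converges
kubectl --namespace=e2e-tests-statefulset-tsc75 get statefulset ss2 \
  -o jsonpath='{.status.currentRevision} {.status.updateRevision}'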
Feb 15 11:46:18.985: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 15 11:46:19.004: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:46:19.004: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 15 11:46:19.004: INFO: Container coredns ready: true, restart count 0 Feb 15 11:46:19.004: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 15 11:46:19.004: INFO: Container kube-proxy ready: true, restart count 0 Feb 15 11:46:19.004: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:46:19.004: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 15 11:46:19.004: INFO: Container weave ready: true, restart count 0 Feb 15 11:46:19.004: INFO: Container weave-npc ready: true, restart count 0 Feb 15 11:46:19.004: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 15 11:46:19.004: INFO: Container coredns ready: true, restart count 0 Feb 15 11:46:19.004: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:46:19.004: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f390de6b072fe8], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:46:20.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-j2jzt" for this suite. 
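The FailedScheduling event above is all this spec asserts: a pod whose nodeSelector matches no node label must stay Pending. A minimal manual repro looks like the following; the pod name, selector key, and value are illustrative, not taken from the suite.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    env: no-such-label        # matches no node label, so scheduling must fail
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod restricted-pod-demo   # expect a FailedScheduling event like the one logged above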
Feb 15 11:46:26.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:46:26.240: INFO: namespace: e2e-tests-sched-pred-j2jzt, resource: bindings, ignored listing per whitelist Feb 15 11:46:26.489: INFO: namespace e2e-tests-sched-pred-j2jzt deletion completed in 6.367435975s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.680 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:46:26.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-cc57d03a-4fe8-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 11:46:26.779: INFO: Waiting up to 5m0s for pod "pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-ggqhw" to be "success or failure" Feb 15 11:46:26.801: INFO: Pod "pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.935333ms Feb 15 11:46:28.844: INFO: Pod "pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064684385s Feb 15 11:46:30.863: INFO: Pod "pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084339496s Feb 15 11:46:32.879: INFO: Pod "pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100528201s Feb 15 11:46:34.916: INFO: Pod "pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137400223s STEP: Saw pod success Feb 15 11:46:34.917: INFO: Pod "pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:46:34.928: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 15 11:46:35.021: INFO: Waiting for pod pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007 to disappear Feb 15 11:46:35.047: INFO: Pod pod-secrets-cc58c32c-4fe8-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:46:35.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ggqhw" for this suite. 
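The secret-volume specs in this suite follow one pattern: create a Secret, mount it with an explicit defaultMode (and optionally fsGroup), and have a short-lived test container read the projected file back before the pod completes. A hedged manual equivalent follows; the secret name, key, mount path, and the 0400 mode are illustrative, since the conformance test's exact values are not in this log.

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400          # octal file mode applied to the projected keys
EOF
kubectl logs secret-defaultmode-demo   # once the pod has Succeeded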
Feb 15 11:46:41.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:46:41.325: INFO: namespace: e2e-tests-secrets-ggqhw, resource: bindings, ignored listing per whitelist Feb 15 11:46:41.332: INFO: namespace e2e-tests-secrets-ggqhw deletion completed in 6.261225211s • [SLOW TEST:14.842 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:46:41.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 15 11:46:52.117: INFO: Successfully updated pod "labelsupdated51ef8ab-4fe8-11ea-960a-0242ac110007" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:46:54.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nch4t" for this suite. 
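The labels-on-modification spec relies on the kubelet refreshing a downward API volume after the pod's labels change, which is why the log only records "Successfully updated pod". A sketch of the same flow is shown below; all names, the label values, and the mount path are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-demo stage=after --overwrite
# after the kubelet's next sync the projected file reflects the new label
kubectl exec labels-demo -- cat /etc/podinfo/labels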
Feb 15 11:47:18.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:47:18.368: INFO: namespace: e2e-tests-downward-api-nch4t, resource: bindings, ignored listing per whitelist Feb 15 11:47:18.652: INFO: namespace e2e-tests-downward-api-nch4t deletion completed in 24.446440139s • [SLOW TEST:37.320 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:47:18.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 15 11:47:18.885: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 15 11:47:18.893: INFO: Waiting for terminating namespaces to be deleted... Feb 15 11:47:18.896: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 15 11:47:18.908: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 15 11:47:18.908: INFO: Container coredns ready: true, restart count 0 Feb 15 11:47:18.908: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:47:18.908: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:47:18.908: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:47:18.908: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 15 11:47:18.908: INFO: Container coredns ready: true, restart count 0 Feb 15 11:47:18.908: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 15 11:47:18.908: INFO: Container kube-proxy ready: true, restart count 0 Feb 15 11:47:18.908: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 15 11:47:18.908: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 15 11:47:18.908: INFO: Container weave ready: true, restart count 0 Feb 15 11:47:18.908: INFO: Container weave-npc ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on 
Node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 15 11:47:18.957: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-eb758977-4fe8-11ea-960a-0242ac110007.15f390ec5fd057bb], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-ppjz5/filler-pod-eb758977-4fe8-11ea-960a-0242ac110007 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-eb758977-4fe8-11ea-960a-0242ac110007.15f390ed89e8a750], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-eb758977-4fe8-11ea-960a-0242ac110007.15f390ee28a8ad78], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-eb758977-4fe8-11ea-960a-0242ac110007.15f390ee61e7f470], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f390eeb6f1b66b], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:47:30.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-ppjz5" for this suite. 
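The resource-limits spec works by summing the CPU requests already reported for the node (the "requesting resource cpu=..." lines above), starting a filler pod that consumes most of what is left, and then asserting that one more CPU-requesting pod cannot be scheduled, which is the "Insufficient cpu" event in the log. The commands below inspect the same numbers by hand and reproduce an equivalent over-request; the 600m figure is an assumption, anything above the node's remaining unrequested CPU behaves the same way.

# shows Allocatable plus the "Allocated resources" request totals the scheduler uses
kubectl describe node hunter-server-hu5at5svl7ps

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-overcommit-demo      # illustrative name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"              # assumption: larger than the CPU still unrequested on the node
EOF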
Feb 15 11:47:38.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:47:39.855: INFO: namespace: e2e-tests-sched-pred-ppjz5, resource: bindings, ignored listing per whitelist Feb 15 11:47:40.058: INFO: namespace e2e-tests-sched-pred-ppjz5 deletion completed in 9.749138613s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:21.402 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:47:40.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f82ada45-4fe8-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 11:47:40.300: INFO: Waiting up to 5m0s for pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-trjf8" to be "success or failure" Feb 15 11:47:40.356: INFO: Pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 56.143542ms Feb 15 11:47:42.384: INFO: Pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084708791s Feb 15 11:47:44.394: INFO: Pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094481239s Feb 15 11:47:47.109: INFO: Pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809499623s Feb 15 11:47:49.149: INFO: Pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.849316296s Feb 15 11:47:51.170: INFO: Pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.869806337s STEP: Saw pod success Feb 15 11:47:51.170: INFO: Pod "pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:47:51.179: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 15 11:47:51.450: INFO: Waiting for pod pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007 to disappear Feb 15 11:47:51.473: INFO: Pod pod-secrets-f82c954f-4fe8-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:47:51.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-trjf8" for this suite. Feb 15 11:47:57.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:47:57.716: INFO: namespace: e2e-tests-secrets-trjf8, resource: bindings, ignored listing per whitelist Feb 15 11:47:57.772: INFO: namespace e2e-tests-secrets-trjf8 deletion completed in 6.293092333s • [SLOW TEST:17.713 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:47:57.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:47:57.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-bzdgl" to be "success or failure" Feb 15 11:47:57.998: INFO: Pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6153ms Feb 15 11:48:00.024: INFO: Pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032480658s Feb 15 11:48:02.047: INFO: Pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055831232s Feb 15 11:48:04.064: INFO: Pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072958274s Feb 15 11:48:06.219: INFO: Pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.227290979s Feb 15 11:48:08.627: INFO: Pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.635709073s STEP: Saw pod success Feb 15 11:48:08.628: INFO: Pod "downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:48:08.648: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:48:08.926: INFO: Waiting for pod downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:48:08.968: INFO: Pod downwardapi-volume-02b695d7-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:48:08.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bzdgl" for this suite. Feb 15 11:48:15.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:48:15.231: INFO: namespace: e2e-tests-downward-api-bzdgl, resource: bindings, ignored listing per whitelist Feb 15 11:48:15.323: INFO: namespace e2e-tests-downward-api-bzdgl deletion completed in 6.338927324s • [SLOW TEST:17.551 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:48:15.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 15 11:48:15.568: INFO: Waiting up to 5m0s for pod "pod-0d250528-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-5qnsw" to be "success or failure" Feb 15 11:48:15.594: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 26.470313ms Feb 15 11:48:19.374: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805639008s Feb 15 11:48:21.403: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.834781364s Feb 15 11:48:23.422: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.854432536s Feb 15 11:48:25.452: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.884270458s Feb 15 11:48:27.467: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.899179466s Feb 15 11:48:29.804: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.235838803s STEP: Saw pod success Feb 15 11:48:29.804: INFO: Pod "pod-0d250528-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:48:29.814: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0d250528-4fe9-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:48:29.967: INFO: Waiting for pod pod-0d250528-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:48:29.983: INFO: Pod pod-0d250528-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:48:29.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5qnsw" for this suite. Feb 15 11:48:36.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:48:36.146: INFO: namespace: e2e-tests-emptydir-5qnsw, resource: bindings, ignored listing per whitelist Feb 15 11:48:36.296: INFO: namespace e2e-tests-emptydir-5qnsw deletion completed in 6.300723407s • [SLOW TEST:20.973 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:48:36.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 15 11:48:36.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4bxxj' Feb 15 11:48:38.811: INFO: stderr: "" Feb 15 11:48:38.811: INFO: stdout: "pod/pause created\n" Feb 15 11:48:38.811: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 15 11:48:38.812: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-4bxxj" to be "running and ready" Feb 15 11:48:38.866: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 54.66874ms Feb 15 11:48:40.893: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081015292s Feb 15 11:48:42.910: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098493266s Feb 15 11:48:45.385: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.573098376s Feb 15 11:48:47.411: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.599757581s Feb 15 11:48:49.429: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.617336326s Feb 15 11:48:49.429: INFO: Pod "pause" satisfied condition "running and ready" Feb 15 11:48:49.429: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 15 11:48:49.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-4bxxj' Feb 15 11:48:49.693: INFO: stderr: "" Feb 15 11:48:49.693: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 15 11:48:49.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-4bxxj' Feb 15 11:48:49.840: INFO: stderr: "" Feb 15 11:48:49.840: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 15 11:48:49.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-4bxxj' Feb 15 11:48:50.039: INFO: stderr: "" Feb 15 11:48:50.040: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 15 11:48:50.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-4bxxj' Feb 15 11:48:50.220: INFO: stderr: "" Feb 15 11:48:50.220: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 12s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 15 11:48:50.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4bxxj' Feb 15 11:48:50.558: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 11:48:50.559: INFO: stdout: "pod \"pause\" force deleted\n" Feb 15 11:48:50.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-4bxxj' Feb 15 11:48:50.822: INFO: stderr: "No resources found.\n" Feb 15 11:48:50.822: INFO: stdout: "" Feb 15 11:48:50.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-4bxxj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 11:48:51.240: INFO: stderr: "" Feb 15 11:48:51.240: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:48:51.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4bxxj" for this suite. 
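Stripped of the --kubeconfig and --namespace plumbing, the label round-trip exercised above is just three kubectl invocations, reproduced from the log:

kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label     # TESTING-LABEL column shows testing-label-value
kubectl label pods pause testing-label-    # trailing '-' removes the label again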
Feb 15 11:48:57.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:48:57.675: INFO: namespace: e2e-tests-kubectl-4bxxj, resource: bindings, ignored listing per whitelist Feb 15 11:48:57.691: INFO: namespace e2e-tests-kubectl-4bxxj deletion completed in 6.331902545s • [SLOW TEST:21.394 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:48:57.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 11:48:57.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-tp5zx" to be "success or failure" Feb 15 11:48:57.894: INFO: Pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.516717ms Feb 15 11:49:00.111: INFO: Pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230072134s Feb 15 11:49:02.164: INFO: Pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283735128s Feb 15 11:49:04.629: INFO: Pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.748746748s Feb 15 11:49:06.646: INFO: Pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.765482101s Feb 15 11:49:08.725: INFO: Pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.844833857s STEP: Saw pod success Feb 15 11:49:08.726: INFO: Pod "downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:49:08.756: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 11:49:08.919: INFO: Waiting for pod downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:49:08.931: INFO: Pod downwardapi-volume-26698ad8-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:49:08.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tp5zx" for this suite. Feb 15 11:49:14.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:49:14.999: INFO: namespace: e2e-tests-projected-tp5zx, resource: bindings, ignored listing per whitelist Feb 15 11:49:15.186: INFO: namespace e2e-tests-projected-tp5zx deletion completed in 6.24217411s • [SLOW TEST:17.495 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:49:15.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Feb 15 11:49:15.387: INFO: Waiting up to 5m0s for pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-var-expansion-gzmmv" to be "success or failure" Feb 15 11:49:15.395: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.467471ms Feb 15 11:49:17.414: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026798862s Feb 15 11:49:19.433: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046126325s Feb 15 11:49:21.929: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.541876161s Feb 15 11:49:24.208: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.820957428s Feb 15 11:49:26.228: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.840573223s Feb 15 11:49:28.250: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.863113665s STEP: Saw pod success Feb 15 11:49:28.251: INFO: Pod "var-expansion-30d85753-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:49:28.265: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-30d85753-4fe9-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 11:49:28.621: INFO: Waiting for pod var-expansion-30d85753-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:49:28.670: INFO: Pod var-expansion-30d85753-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:49:28.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-gzmmv" for this suite. Feb 15 11:49:34.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:49:35.068: INFO: namespace: e2e-tests-var-expansion-gzmmv, resource: bindings, ignored listing per whitelist Feb 15 11:49:35.071: INFO: namespace e2e-tests-var-expansion-gzmmv deletion completed in 6.343204217s • [SLOW TEST:19.883 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:49:35.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3cb23413-4fe9-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 11:49:35.280: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-k52p4" to be "success or failure" Feb 15 11:49:35.359: INFO: Pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 79.12741ms Feb 15 11:49:37.376: INFO: Pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096089974s Feb 15 11:49:39.389: INFO: Pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109392207s Feb 15 11:49:41.612: INFO: Pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.33236054s Feb 15 11:49:43.660: INFO: Pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.380076772s Feb 15 11:49:45.668: INFO: Pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.388303027s STEP: Saw pod success Feb 15 11:49:45.668: INFO: Pod "pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:49:45.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Feb 15 11:49:46.390: INFO: Waiting for pod pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:49:46.440: INFO: Pod pod-projected-secrets-3cb32362-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:49:46.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k52p4" for this suite. Feb 15 11:49:54.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:49:54.902: INFO: namespace: e2e-tests-projected-k52p4, resource: bindings, ignored listing per whitelist Feb 15 11:49:54.915: INFO: namespace e2e-tests-projected-k52p4 deletion completed in 8.465194813s • [SLOW TEST:19.844 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:49:54.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 15 11:49:55.076: INFO: Waiting up to 5m0s for pod "pod-48818f41-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-6cxvv" to be "success or failure" Feb 15 11:49:55.219: INFO: Pod "pod-48818f41-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 142.882482ms Feb 15 11:49:57.241: INFO: Pod "pod-48818f41-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164825563s Feb 15 11:49:59.835: INFO: Pod "pod-48818f41-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75887199s Feb 15 11:50:01.854: INFO: Pod "pod-48818f41-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.778052134s Feb 15 11:50:03.922: INFO: Pod "pod-48818f41-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.846400606s STEP: Saw pod success Feb 15 11:50:03.923: INFO: Pod "pod-48818f41-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:50:04.296: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-48818f41-4fe9-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:50:04.789: INFO: Waiting for pod pod-48818f41-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:50:04.834: INFO: Pod pod-48818f41-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:50:04.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6cxvv" for this suite. Feb 15 11:50:11.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:50:11.245: INFO: namespace: e2e-tests-emptydir-6cxvv, resource: bindings, ignored listing per whitelist Feb 15 11:50:11.470: INFO: namespace e2e-tests-emptydir-6cxvv deletion completed in 6.464912828s • [SLOW TEST:16.555 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:50:11.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Feb 15 11:50:11.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-6lxvv run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 15 11:50:21.046: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0215 11:50:19.215786 1610 log.go:172] (0xc0007da160) (0xc000980140) Create stream\nI0215 11:50:19.216207 1610 log.go:172] (0xc0007da160) (0xc000980140) Stream added, broadcasting: 1\nI0215 11:50:19.227235 1610 log.go:172] (0xc0007da160) Reply frame received for 1\nI0215 11:50:19.227276 1610 log.go:172] (0xc0007da160) (0xc0009801e0) Create stream\nI0215 11:50:19.227285 1610 log.go:172] (0xc0007da160) (0xc0009801e0) Stream added, broadcasting: 3\nI0215 11:50:19.229014 1610 log.go:172] (0xc0007da160) Reply frame received for 3\nI0215 11:50:19.229075 1610 log.go:172] (0xc0007da160) (0xc00035a1e0) Create stream\nI0215 11:50:19.229089 1610 log.go:172] (0xc0007da160) (0xc00035a1e0) Stream added, broadcasting: 5\nI0215 11:50:19.230748 1610 log.go:172] (0xc0007da160) Reply frame received for 5\nI0215 11:50:19.230791 1610 log.go:172] (0xc0007da160) (0xc000648f00) Create stream\nI0215 11:50:19.230806 1610 log.go:172] (0xc0007da160) (0xc000648f00) Stream added, broadcasting: 7\nI0215 11:50:19.232636 1610 log.go:172] (0xc0007da160) Reply frame received for 7\nI0215 11:50:19.232932 1610 log.go:172] (0xc0009801e0) (3) Writing data frame\nI0215 11:50:19.233109 1610 log.go:172] (0xc0009801e0) (3) Writing data frame\nI0215 11:50:19.247330 1610 log.go:172] (0xc0007da160) Data frame received for 5\nI0215 11:50:19.247362 1610 log.go:172] (0xc00035a1e0) (5) Data frame handling\nI0215 11:50:19.247392 1610 log.go:172] (0xc00035a1e0) (5) Data frame sent\nI0215 11:50:19.250150 1610 log.go:172] (0xc0007da160) Data frame received for 5\nI0215 11:50:19.250172 1610 log.go:172] (0xc00035a1e0) (5) Data frame handling\nI0215 11:50:19.250207 1610 log.go:172] (0xc00035a1e0) (5) Data frame sent\nI0215 11:50:20.984033 1610 log.go:172] (0xc0007da160) Data frame received for 1\nI0215 11:50:20.984169 1610 log.go:172] (0xc000980140) (1) Data frame handling\nI0215 11:50:20.984223 1610 log.go:172] (0xc000980140) (1) Data frame sent\nI0215 11:50:20.984263 1610 log.go:172] (0xc0007da160) (0xc000980140) Stream removed, broadcasting: 1\nI0215 11:50:20.984962 1610 log.go:172] (0xc0007da160) (0xc0009801e0) Stream removed, broadcasting: 3\nI0215 11:50:20.985664 1610 log.go:172] (0xc0007da160) (0xc00035a1e0) Stream removed, broadcasting: 5\nI0215 11:50:20.986174 1610 log.go:172] (0xc0007da160) (0xc000648f00) Stream removed, broadcasting: 7\nI0215 11:50:20.986254 1610 log.go:172] (0xc0007da160) (0xc000980140) Stream removed, broadcasting: 1\nI0215 11:50:20.986274 1610 log.go:172] (0xc0007da160) (0xc0009801e0) Stream removed, broadcasting: 3\nI0215 11:50:20.986287 1610 log.go:172] (0xc0007da160) (0xc00035a1e0) Stream removed, broadcasting: 5\nI0215 11:50:20.986307 1610 log.go:172] (0xc0007da160) (0xc000648f00) Stream removed, broadcasting: 7\nI0215 11:50:20.986626 1610 log.go:172] (0xc0007da160) Go away received\n" Feb 15 11:50:21.047: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:50:23.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6lxvv" for this suite. 
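The command captured above (note the deprecation warning for --generator=job/v1 on this 1.13-era kubectl) creates a Job, attaches to it with stdin open, and deletes it once the attached session ends. Run by hand it looks roughly like the following; shell quoting is added, and the piped abcd1234 matches the stdin content visible in the stdout line above:

echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# expected output, as in the log: abcd1234stdin closed, then: job.batch "e2e-test-rm-busybox-job" deleted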
Feb 15 11:50:35.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:50:35.253: INFO: namespace: e2e-tests-kubectl-6lxvv, resource: bindings, ignored listing per whitelist Feb 15 11:50:35.327: INFO: namespace e2e-tests-kubectl-6lxvv deletion completed in 12.253366357s • [SLOW TEST:23.856 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:50:35.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:50:45.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-9d76d" for this suite. 
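The kubelet spec above emits no STEP detail beyond setup and teardown: it runs a busybox pod whose command writes to stdout and then verifies the text is retrievable from the pod's logs. A hedged stand-in is shown below; the pod name and message are made up, since the suite's exact command is not shown in this log.

kubectl run logs-demo --image=busybox:1.29 --restart=Never -- sh -c 'echo running in a busybox pod'
kubectl logs logs-demo    # should print the echoed line once the pod has run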
Feb 15 11:51:35.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:51:35.896: INFO: namespace: e2e-tests-kubelet-test-9d76d, resource: bindings, ignored listing per whitelist Feb 15 11:51:36.029: INFO: namespace e2e-tests-kubelet-test-9d76d deletion completed in 50.33000069s • [SLOW TEST:60.702 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:51:36.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 15 11:51:46.390: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-84d8185e-4fe9-11ea-960a-0242ac110007,GenerateName:,Namespace:e2e-tests-events-29kms,SelfLink:/api/v1/namespaces/e2e-tests-events-29kms/pods/send-events-84d8185e-4fe9-11ea-960a-0242ac110007,UID:84d92d9a-4fe9-11ea-a994-fa163e34d433,ResourceVersion:21752465,Generation:0,CreationTimestamp:2020-02-15 11:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 295062941,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-879pn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-879pn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-879pn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020dbd10} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0020dbd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:51:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:51:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:51:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:51:36 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-15 11:51:36 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-15 11:51:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://f15f64a714fd96290b0405613903b7ad7f64b6389093769180d3370b9a902779}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 15 11:51:48.407: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 15 11:51:50.423: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:51:50.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-29kms" for this suite. Feb 15 11:52:34.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:52:34.682: INFO: namespace: e2e-tests-events-29kms, resource: bindings, ignored listing per whitelist Feb 15 11:52:34.794: INFO: namespace e2e-tests-events-29kms deletion completed in 44.334851164s • [SLOW TEST:58.764 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:52:34.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:52:34.972: INFO: Creating ReplicaSet my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007 Feb 15 11:52:35.064: INFO: Pod name my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007: Found 0 pods out of 1 Feb 15 11:52:40.911: INFO: Pod name my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007: Found 1 pods out of 1 Feb 15 11:52:40.911: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007" is running Feb 15 11:52:45.535: INFO: 
Pod "my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007-nhbc7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 11:52:35 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 11:52:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 11:52:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 11:52:35 +0000 UTC Reason: Message:}]) Feb 15 11:52:45.536: INFO: Trying to dial the pod Feb 15 11:52:50.690: INFO: Controller my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007: Got expected result from replica 1 [my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007-nhbc7]: "my-hostname-basic-a7d17d25-4fe9-11ea-960a-0242ac110007-nhbc7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:52:50.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-ddkgn" for this suite. Feb 15 11:52:56.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:52:56.900: INFO: namespace: e2e-tests-replicaset-ddkgn, resource: bindings, ignored listing per whitelist Feb 15 11:52:56.948: INFO: namespace e2e-tests-replicaset-ddkgn deletion completed in 6.242352501s • [SLOW TEST:22.154 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:52:56.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 15 11:52:57.268: INFO: Waiting up to 5m0s for pod "pod-b5198446-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-hnfsj" to be "success or failure" Feb 15 11:52:57.274: INFO: Pod "pod-b5198446-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323099ms Feb 15 11:52:59.593: INFO: Pod "pod-b5198446-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.324554955s Feb 15 11:53:01.616: INFO: Pod "pod-b5198446-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347716959s Feb 15 11:53:03.907: INFO: Pod "pod-b5198446-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638544065s Feb 15 11:53:05.928: INFO: Pod "pod-b5198446-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.659637899s Feb 15 11:53:07.949: INFO: Pod "pod-b5198446-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.680490989s STEP: Saw pod success Feb 15 11:53:07.949: INFO: Pod "pod-b5198446-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:53:07.990: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b5198446-4fe9-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:53:08.194: INFO: Waiting for pod pod-b5198446-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:53:08.224: INFO: Pod pod-b5198446-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:53:08.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hnfsj" for this suite. Feb 15 11:53:14.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:53:14.596: INFO: namespace: e2e-tests-emptydir-hnfsj, resource: bindings, ignored listing per whitelist Feb 15 11:53:14.647: INFO: namespace e2e-tests-emptydir-hnfsj deletion completed in 6.234579654s • [SLOW TEST:17.698 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:53:14.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-fflqc [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-fflqc STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-fflqc STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-fflqc 
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-fflqc Feb 15 11:53:26.920: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fflqc, name: ss-0, uid: c625e02b-4fe9-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Feb 15 11:53:27.081: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fflqc, name: ss-0, uid: c625e02b-4fe9-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 15 11:53:27.198: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fflqc, name: ss-0, uid: c625e02b-4fe9-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 15 11:53:27.210: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-fflqc STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-fflqc STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-fflqc and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 15 11:53:40.643: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fflqc Feb 15 11:53:40.653: INFO: Scaling statefulset ss to 0 Feb 15 11:54:00.837: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 11:54:00.846: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:54:00.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-fflqc" for this suite. 
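The AfterEach teardown logged above (scale ss to 0, wait for status.replicas to reach 0, then delete ss) maps onto three plain kubectl calls; the statefulset name ss and the namespace are taken from the log, and the jsonpath check is only a sketch of the wait the framework performs:

kubectl --namespace=e2e-tests-statefulset-fflqc scale statefulset ss --replicas=0
# poll until no replicas are reported before removing the object itself
kubectl --namespace=e2e-tests-statefulset-fflqc get statefulset ss -o jsonpath='{.status.replicas}'
kubectl --namespace=e2e-tests-statefulset-fflqc delete statefulset ss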
Feb 15 11:54:09.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:54:09.234: INFO: namespace: e2e-tests-statefulset-fflqc, resource: bindings, ignored listing per whitelist Feb 15 11:54:09.242: INFO: namespace e2e-tests-statefulset-fflqc deletion completed in 8.273690044s • [SLOW TEST:54.595 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:54:09.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 15 11:54:09.458: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kxp2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-kxp2h/configmaps/e2e-watch-test-watch-closed,UID:e020891e-4fe9-11ea-a994-fa163e34d433,ResourceVersion:21752866,Generation:0,CreationTimestamp:2020-02-15 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 15 11:54:09.458: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kxp2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-kxp2h/configmaps/e2e-watch-test-watch-closed,UID:e020891e-4fe9-11ea-a994-fa163e34d433,ResourceVersion:21752867,Generation:0,CreationTimestamp:2020-02-15 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 15 
11:54:09.505: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kxp2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-kxp2h/configmaps/e2e-watch-test-watch-closed,UID:e020891e-4fe9-11ea-a994-fa163e34d433,ResourceVersion:21752868,Generation:0,CreationTimestamp:2020-02-15 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 15 11:54:09.505: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kxp2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-kxp2h/configmaps/e2e-watch-test-watch-closed,UID:e020891e-4fe9-11ea-a994-fa163e34d433,ResourceVersion:21752869,Generation:0,CreationTimestamp:2020-02-15 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:54:09.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-kxp2h" for this suite. Feb 15 11:54:15.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:54:15.828: INFO: namespace: e2e-tests-watch-kxp2h, resource: bindings, ignored listing per whitelist Feb 15 11:54:15.942: INFO: namespace e2e-tests-watch-kxp2h deletion completed in 6.427922958s • [SLOW TEST:6.699 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:54:15.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Feb 15 11:54:24.942: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-e47f4e2c-4fe9-11ea-960a-0242ac110007", GenerateName:"", Namespace:"e2e-tests-pods-wgp6m", SelfLink:"/api/v1/namespaces/e2e-tests-pods-wgp6m/pods/pod-submit-remove-e47f4e2c-4fe9-11ea-960a-0242ac110007", UID:"e481f586-4fe9-11ea-a994-fa163e34d433", ResourceVersion:"21752904", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717364456, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"774434178"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4v92b", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002575040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4v92b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002165568), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021a1680), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0021659b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0021659d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0021659d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0021659dc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717364456, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717364464, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717364464, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717364456, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002594500), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002594520), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://5a34afef250603b09afa06b151247bdb96213f5b6732818aae6a0bc2ec3ac9da"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:54:42.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wgp6m" for this suite. 
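The pod dump above shows an nginx:1.14-alpine pod with the default 30-second termination grace period, and the steps that follow delete it gracefully while a watch confirms the termination notice and the final deletion. Roughly the same check by hand, with an illustrative pod name (the e2e names are generated):

kubectl run submit-remove-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
# in a second shell, watch the pod's lifecycle while it is deleted
kubectl get pods --watch --field-selector=metadata.name=submit-remove-demo
# graceful delete with the same 30s grace period recorded in the pod spec above;
# the watch should show the pod terminating before it finally disappears
kubectl delete pod submit-remove-demo --grace-period=30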
Feb 15 11:54:48.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:54:48.768: INFO: namespace: e2e-tests-pods-wgp6m, resource: bindings, ignored listing per whitelist Feb 15 11:54:48.911: INFO: namespace e2e-tests-pods-wgp6m deletion completed in 6.243541959s • [SLOW TEST:32.969 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:54:48.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Feb 15 11:54:49.113: INFO: Waiting up to 5m0s for pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007" in namespace "e2e-tests-containers-mvswd" to be "success or failure" Feb 15 11:54:49.125: INFO: Pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.208392ms Feb 15 11:54:51.144: INFO: Pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030981723s Feb 15 11:54:53.155: INFO: Pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042038774s Feb 15 11:54:55.172: INFO: Pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059334717s Feb 15 11:54:57.182: INFO: Pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069553306s Feb 15 11:54:59.236: INFO: Pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122929694s STEP: Saw pod success Feb 15 11:54:59.236: INFO: Pod "client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:54:59.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 11:54:59.427: INFO: Waiting for pod client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007 to disappear Feb 15 11:54:59.437: INFO: Pod client-containers-f7c3a6e6-4fe9-11ea-960a-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:54:59.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-mvswd" for this suite. 
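What this Docker Containers spec exercises is that a container's args field replaces the image's default CMD. A minimal illustration from the command line, using a made-up pod name and message; with kubectl run, everything after the bare -- becomes the container's args:

kubectl run override-args-demo --image=busybox --restart=Never -- echo 'args replaced the image CMD'
# the log should contain the overridden argument string instead of busybox's default shell
kubectl logs override-args-demo
kubectl delete pod override-args-demo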
Feb 15 11:55:05.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:55:05.807: INFO: namespace: e2e-tests-containers-mvswd, resource: bindings, ignored listing per whitelist Feb 15 11:55:05.829: INFO: namespace e2e-tests-containers-mvswd deletion completed in 6.376966091s • [SLOW TEST:16.916 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:55:05.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:55:05.988: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:55:07.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-nfrtp" for this suite. 
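The CustomResourceDefinition spec registers a definition and removes it again through the client; a comparable by-hand sketch against a cluster of this vintage, where the CRD API is still apiextensions.k8s.io/v1beta1, using a made-up group and kind:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
EOF
kubectl delete crd widgets.example.com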
Feb 15 11:55:13.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:55:13.533: INFO: namespace: e2e-tests-custom-resource-definition-nfrtp, resource: bindings, ignored listing per whitelist Feb 15 11:55:13.580: INFO: namespace e2e-tests-custom-resource-definition-nfrtp deletion completed in 6.230511984s • [SLOW TEST:7.751 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:55:13.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 15 11:55:13.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:14.439: INFO: stderr: "" Feb 15 11:55:14.439: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 15 11:55:14.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:14.695: INFO: stderr: "" Feb 15 11:55:14.695: INFO: stdout: "update-demo-nautilus-bvmhm update-demo-nautilus-dznlh " Feb 15 11:55:14.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:14.808: INFO: stderr: "" Feb 15 11:55:14.808: INFO: stdout: "" Feb 15 11:55:14.808: INFO: update-demo-nautilus-bvmhm is created but not running Feb 15 11:55:19.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:19.987: INFO: stderr: "" Feb 15 11:55:19.987: INFO: stdout: "update-demo-nautilus-bvmhm update-demo-nautilus-dznlh " Feb 15 11:55:19.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:20.246: INFO: stderr: "" Feb 15 11:55:20.246: INFO: stdout: "" Feb 15 11:55:20.246: INFO: update-demo-nautilus-bvmhm is created but not running Feb 15 11:55:25.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:25.496: INFO: stderr: "" Feb 15 11:55:25.496: INFO: stdout: "update-demo-nautilus-bvmhm update-demo-nautilus-dznlh " Feb 15 11:55:25.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:25.672: INFO: stderr: "" Feb 15 11:55:25.673: INFO: stdout: "true" Feb 15 11:55:25.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:25.832: INFO: stderr: "" Feb 15 11:55:25.833: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:55:25.833: INFO: validating pod update-demo-nautilus-bvmhm Feb 15 11:55:25.897: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:55:25.897: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:55:25.897: INFO: update-demo-nautilus-bvmhm is verified up and running Feb 15 11:55:25.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dznlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:26.075: INFO: stderr: "" Feb 15 11:55:26.075: INFO: stdout: "" Feb 15 11:55:26.075: INFO: update-demo-nautilus-dznlh is created but not running Feb 15 11:55:31.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:31.317: INFO: stderr: "" Feb 15 11:55:31.317: INFO: stdout: "update-demo-nautilus-bvmhm update-demo-nautilus-dznlh " Feb 15 11:55:31.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:31.481: INFO: stderr: "" Feb 15 11:55:31.482: INFO: stdout: "true" Feb 15 11:55:31.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:31.621: INFO: stderr: "" Feb 15 11:55:31.621: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:55:31.621: INFO: validating pod update-demo-nautilus-bvmhm Feb 15 11:55:31.631: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:55:31.631: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:55:31.631: INFO: update-demo-nautilus-bvmhm is verified up and running Feb 15 11:55:31.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dznlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:31.766: INFO: stderr: "" Feb 15 11:55:31.766: INFO: stdout: "true" Feb 15 11:55:31.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dznlh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:31.947: INFO: stderr: "" Feb 15 11:55:31.947: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:55:31.947: INFO: validating pod update-demo-nautilus-dznlh Feb 15 11:55:31.956: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:55:31.956: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:55:31.957: INFO: update-demo-nautilus-dznlh is verified up and running STEP: scaling down the replication controller Feb 15 11:55:31.961: INFO: scanned /root for discovery docs: Feb 15 11:55:31.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:33.473: INFO: stderr: "" Feb 15 11:55:33.474: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 15 11:55:33.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:33.892: INFO: stderr: "" Feb 15 11:55:33.893: INFO: stdout: "update-demo-nautilus-bvmhm update-demo-nautilus-dznlh " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 15 11:55:38.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:39.038: INFO: stderr: "" Feb 15 11:55:39.038: INFO: stdout: "update-demo-nautilus-bvmhm update-demo-nautilus-dznlh " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 15 11:55:44.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:44.157: INFO: stderr: "" Feb 15 11:55:44.157: INFO: stdout: "update-demo-nautilus-bvmhm " Feb 15 11:55:44.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:44.276: INFO: stderr: "" Feb 15 11:55:44.276: INFO: stdout: "true" Feb 15 11:55:44.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:44.396: INFO: stderr: "" Feb 15 11:55:44.396: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:55:44.396: INFO: validating pod update-demo-nautilus-bvmhm Feb 15 11:55:44.404: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:55:44.404: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:55:44.404: INFO: update-demo-nautilus-bvmhm is verified up and running STEP: scaling up the replication controller Feb 15 11:55:44.408: INFO: scanned /root for discovery docs: Feb 15 11:55:44.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:46.226: INFO: stderr: "" Feb 15 11:55:46.226: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 15 11:55:46.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:46.430: INFO: stderr: "" Feb 15 11:55:46.430: INFO: stdout: "update-demo-nautilus-6pvzv update-demo-nautilus-bvmhm " Feb 15 11:55:46.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pvzv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:46.605: INFO: stderr: "" Feb 15 11:55:46.606: INFO: stdout: "" Feb 15 11:55:46.606: INFO: update-demo-nautilus-6pvzv is created but not running Feb 15 11:55:51.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:52.144: INFO: stderr: "" Feb 15 11:55:52.145: INFO: stdout: "update-demo-nautilus-6pvzv update-demo-nautilus-bvmhm " Feb 15 11:55:52.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pvzv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:52.549: INFO: stderr: "" Feb 15 11:55:52.550: INFO: stdout: "" Feb 15 11:55:52.550: INFO: update-demo-nautilus-6pvzv is created but not running Feb 15 11:55:57.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:58.264: INFO: stderr: "" Feb 15 11:55:58.265: INFO: stdout: "update-demo-nautilus-6pvzv update-demo-nautilus-bvmhm " Feb 15 11:55:58.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pvzv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:58.404: INFO: stderr: "" Feb 15 11:55:58.404: INFO: stdout: "true" Feb 15 11:55:58.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pvzv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:58.630: INFO: stderr: "" Feb 15 11:55:58.630: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:55:58.630: INFO: validating pod update-demo-nautilus-6pvzv Feb 15 11:55:58.684: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:55:58.684: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:55:58.684: INFO: update-demo-nautilus-6pvzv is verified up and running Feb 15 11:55:58.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:58.846: INFO: stderr: "" Feb 15 11:55:58.847: INFO: stdout: "true" Feb 15 11:55:58.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvmhm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:59.036: INFO: stderr: "" Feb 15 11:55:59.036: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 11:55:59.036: INFO: validating pod update-demo-nautilus-bvmhm Feb 15 11:55:59.047: INFO: got data: { "image": "nautilus.jpg" } Feb 15 11:55:59.047: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 11:55:59.047: INFO: update-demo-nautilus-bvmhm is verified up and running STEP: using delete to clean up resources Feb 15 11:55:59.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:59.184: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 11:55:59.185: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 15 11:55:59.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-2sbp6' Feb 15 11:55:59.322: INFO: stderr: "No resources found.\n" Feb 15 11:55:59.322: INFO: stdout: "" Feb 15 11:55:59.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-2sbp6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 11:55:59.453: INFO: stderr: "" Feb 15 11:55:59.453: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:55:59.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2sbp6" for this suite. 
Feb 15 11:56:23.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:56:23.660: INFO: namespace: e2e-tests-kubectl-2sbp6, resource: bindings, ignored listing per whitelist Feb 15 11:56:23.696: INFO: namespace e2e-tests-kubectl-2sbp6 deletion completed in 24.227578848s • [SLOW TEST:70.115 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:56:23.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2tc47;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2tc47;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2tc47.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 102.78.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.78.102_udp@PTR;check="$$(dig +tcp +noall +answer +search 102.78.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.78.102_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2tc47;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2tc47;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2tc47.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2tc47.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2tc47.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 102.78.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.78.102_udp@PTR;check="$$(dig +tcp +noall +answer +search 102.78.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.78.102_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 15 11:56:40.664: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.668: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.672: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.678: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.682: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.690: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.694: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.701: INFO: Unable to read 10.106.78.102_udp@PTR from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.705: INFO: Unable to read 10.106.78.102_tcp@PTR from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.712: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.716: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.725: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2tc47 from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.732: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2tc47 from pod 
e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.741: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.748: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.752: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.756: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.759: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.764: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.767: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.774: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.786: INFO: Unable to read 10.106.78.102_udp@PTR from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.790: INFO: Unable to read 10.106.78.102_tcp@PTR from pod e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-309da850-4fea-11ea-960a-0242ac110007) Feb 15 11:56:40.790: INFO: Lookups using e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-2tc47.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.78.102_udp@PTR 10.106.78.102_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-2tc47 jessie_tcp@dns-test-service.e2e-tests-dns-2tc47 
jessie_udp@dns-test-service.e2e-tests-dns-2tc47.svc jessie_tcp@dns-test-service.e2e-tests-dns-2tc47.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2tc47.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2tc47.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.78.102_udp@PTR 10.106.78.102_tcp@PTR] Feb 15 11:56:46.284: INFO: DNS probes using e2e-tests-dns-2tc47/dns-test-309da850-4fea-11ea-960a-0242ac110007 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:56:46.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-2tc47" for this suite. Feb 15 11:56:54.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:56:55.083: INFO: namespace: e2e-tests-dns-2tc47, resource: bindings, ignored listing per whitelist Feb 15 11:56:55.162: INFO: namespace e2e-tests-dns-2tc47 deletion completed in 8.274300649s • [SLOW TEST:31.465 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:56:55.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-4323b233-4fea-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 11:56:55.593: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-hqfs6" to be "success or failure" Feb 15 11:56:55.606: INFO: Pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.520975ms Feb 15 11:56:57.623: INFO: Pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02959716s Feb 15 11:56:59.636: INFO: Pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042146163s Feb 15 11:57:02.027: INFO: Pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433162354s Feb 15 11:57:04.073: INFO: Pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.479295088s Feb 15 11:57:06.083: INFO: Pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.489832472s STEP: Saw pod success Feb 15 11:57:06.084: INFO: Pod "pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 11:57:06.093: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Feb 15 11:57:06.360: INFO: Waiting for pod pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007 to disappear Feb 15 11:57:06.386: INFO: Pod pod-projected-secrets-432597e9-4fea-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:57:06.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hqfs6" for this suite. Feb 15 11:57:13.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:57:13.434: INFO: namespace: e2e-tests-projected-hqfs6, resource: bindings, ignored listing per whitelist Feb 15 11:57:13.488: INFO: namespace e2e-tests-projected-hqfs6 deletion completed in 7.09365404s • [SLOW TEST:18.326 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:57:13.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:57:13.874: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Feb 15 11:57:13.899: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lffg9/daemonsets","resourceVersion":"21753321"},"items":null} Feb 15 11:57:13.909: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lffg9/pods","resourceVersion":"21753321"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:57:13.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-daemonsets-lffg9" for this suite. Feb 15 11:57:20.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:57:20.213: INFO: namespace: e2e-tests-daemonsets-lffg9, resource: bindings, ignored listing per whitelist Feb 15 11:57:20.320: INFO: namespace e2e-tests-daemonsets-lffg9 deletion completed in 6.220135017s S [SKIPPING] [6.831 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 11:57:13.874: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:57:20.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 15 11:57:20.637: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wssdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-wssdg/configmaps/e2e-watch-test-label-changed,UID:521065c4-4fea-11ea-a994-fa163e34d433,ResourceVersion:21753339,Generation:0,CreationTimestamp:2020-02-15 11:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 15 11:57:20.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wssdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-wssdg/configmaps/e2e-watch-test-label-changed,UID:521065c4-4fea-11ea-a994-fa163e34d433,ResourceVersion:21753340,Generation:0,CreationTimestamp:2020-02-15 11:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 15 11:57:20.638: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wssdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-wssdg/configmaps/e2e-watch-test-label-changed,UID:521065c4-4fea-11ea-a994-fa163e34d433,ResourceVersion:21753341,Generation:0,CreationTimestamp:2020-02-15 11:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 15 11:57:30.962: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wssdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-wssdg/configmaps/e2e-watch-test-label-changed,UID:521065c4-4fea-11ea-a994-fa163e34d433,ResourceVersion:21753355,Generation:0,CreationTimestamp:2020-02-15 11:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 15 11:57:30.963: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wssdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-wssdg/configmaps/e2e-watch-test-label-changed,UID:521065c4-4fea-11ea-a994-fa163e34d433,ResourceVersion:21753356,Generation:0,CreationTimestamp:2020-02-15 11:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 15 11:57:30.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wssdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-wssdg/configmaps/e2e-watch-test-label-changed,UID:521065c4-4fea-11ea-a994-fa163e34d433,ResourceVersion:21753357,Generation:0,CreationTimestamp:2020-02-15 11:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 11:57:30.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-wssdg" for this suite. 
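The Watchers spec above exercises standard label-selector watch semantics: once the configmap's label stops matching the selector, the watch reports the object as DELETED, and restoring the label yields a fresh ADDED followed by MODIFIED and DELETED events for the later mutations. A minimal client-go sketch of that behavior follows; it is illustrative only (not part of the e2e framework), the "default" namespace stands in for the generated test namespace, and it assumes a current client-go release (older releases take no context argument on Watch).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the run above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only ConfigMaps carrying the label the spec toggles.
	// ("default" is arbitrary here; the spec uses a generated namespace.)
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// An object whose label stops matching the selector is reported as
	// DELETED on this watch, and it shows up again as ADDED once the label
	// is restored -- the notification sequence the spec asserts.
	for ev := range w.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			fmt.Printf("Got : %s %s (mutation=%s)\n", ev.Type, cm.Name, cm.Data["mutation"])
		}
	}
}

Toggling the watch-this-configmap label on a matching ConfigMap while this runs should reproduce the Got : ADDED / MODIFIED / DELETED sequence recorded in the log above.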
Feb 15 11:57:37.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 11:57:37.278: INFO: namespace: e2e-tests-watch-wssdg, resource: bindings, ignored listing per whitelist Feb 15 11:57:37.349: INFO: namespace e2e-tests-watch-wssdg deletion completed in 6.332384173s • [SLOW TEST:17.028 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 11:57:37.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-bfxn5 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-bfxn5 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-bfxn5 Feb 15 11:57:37.505: INFO: Found 0 stateful pods, waiting for 1 Feb 15 11:57:47.522: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Feb 15 11:57:57.534: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 15 11:57:57.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:57:58.239: INFO: stderr: "I0215 11:57:57.774742 2348 log.go:172] (0xc0001386e0) (0xc0007e94a0) Create stream\nI0215 11:57:57.774921 2348 log.go:172] (0xc0001386e0) (0xc0007e94a0) Stream added, broadcasting: 1\nI0215 11:57:57.781950 2348 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0215 11:57:57.782145 2348 log.go:172] (0xc0001386e0) (0xc0007e9540) Create stream\nI0215 11:57:57.782166 2348 log.go:172] (0xc0001386e0) (0xc0007e9540) Stream added, broadcasting: 3\nI0215 11:57:57.784418 2348 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0215 11:57:57.784513 2348 log.go:172] (0xc0001386e0) (0xc0006cc000) Create stream\nI0215 11:57:57.784529 2348 log.go:172] (0xc0001386e0) (0xc0006cc000) Stream added, broadcasting: 5\nI0215 11:57:57.786321 2348 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0215 11:57:58.089113 2348 log.go:172] 
(0xc0001386e0) Data frame received for 3\nI0215 11:57:58.089215 2348 log.go:172] (0xc0007e9540) (3) Data frame handling\nI0215 11:57:58.089241 2348 log.go:172] (0xc0007e9540) (3) Data frame sent\nI0215 11:57:58.227714 2348 log.go:172] (0xc0001386e0) Data frame received for 1\nI0215 11:57:58.227916 2348 log.go:172] (0xc0001386e0) (0xc0006cc000) Stream removed, broadcasting: 5\nI0215 11:57:58.227985 2348 log.go:172] (0xc0007e94a0) (1) Data frame handling\nI0215 11:57:58.228005 2348 log.go:172] (0xc0007e94a0) (1) Data frame sent\nI0215 11:57:58.228323 2348 log.go:172] (0xc0001386e0) (0xc0007e9540) Stream removed, broadcasting: 3\nI0215 11:57:58.228601 2348 log.go:172] (0xc0001386e0) (0xc0007e94a0) Stream removed, broadcasting: 1\nI0215 11:57:58.228644 2348 log.go:172] (0xc0001386e0) Go away received\nI0215 11:57:58.229601 2348 log.go:172] (0xc0001386e0) (0xc0007e94a0) Stream removed, broadcasting: 1\nI0215 11:57:58.229622 2348 log.go:172] (0xc0001386e0) (0xc0007e9540) Stream removed, broadcasting: 3\nI0215 11:57:58.229629 2348 log.go:172] (0xc0001386e0) (0xc0006cc000) Stream removed, broadcasting: 5\n" Feb 15 11:57:58.240: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:57:58.240: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:57:58.251: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 15 11:58:08.280: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:58:08.280: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 11:58:08.361: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:08.362: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:08.362: INFO: Feb 15 11:58:08.362: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 15 11:58:10.193: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.965807433s Feb 15 11:58:12.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.134898925s Feb 15 11:58:13.182: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.281003648s Feb 15 11:58:14.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.146479002s Feb 15 11:58:15.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.131843803s Feb 15 11:58:16.230: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.1152281s Feb 15 11:58:17.479: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.098369176s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-bfxn5 Feb 15 11:58:18.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:58:20.359: INFO: stderr: "I0215 11:58:19.812355 2371 log.go:172] (0xc0008702c0) (0xc000759220) Create stream\nI0215 11:58:19.812629 2371 log.go:172] (0xc0008702c0) (0xc000759220) 
Stream added, broadcasting: 1\nI0215 11:58:19.818090 2371 log.go:172] (0xc0008702c0) Reply frame received for 1\nI0215 11:58:19.818129 2371 log.go:172] (0xc0008702c0) (0xc0006e6000) Create stream\nI0215 11:58:19.818137 2371 log.go:172] (0xc0008702c0) (0xc0006e6000) Stream added, broadcasting: 3\nI0215 11:58:19.819001 2371 log.go:172] (0xc0008702c0) Reply frame received for 3\nI0215 11:58:19.819039 2371 log.go:172] (0xc0008702c0) (0xc000434000) Create stream\nI0215 11:58:19.819063 2371 log.go:172] (0xc0008702c0) (0xc000434000) Stream added, broadcasting: 5\nI0215 11:58:19.822440 2371 log.go:172] (0xc0008702c0) Reply frame received for 5\nI0215 11:58:20.051210 2371 log.go:172] (0xc0008702c0) Data frame received for 3\nI0215 11:58:20.051429 2371 log.go:172] (0xc0006e6000) (3) Data frame handling\nI0215 11:58:20.051471 2371 log.go:172] (0xc0006e6000) (3) Data frame sent\nI0215 11:58:20.344724 2371 log.go:172] (0xc0008702c0) Data frame received for 1\nI0215 11:58:20.344900 2371 log.go:172] (0xc000759220) (1) Data frame handling\nI0215 11:58:20.344930 2371 log.go:172] (0xc000759220) (1) Data frame sent\nI0215 11:58:20.344956 2371 log.go:172] (0xc0008702c0) (0xc000759220) Stream removed, broadcasting: 1\nI0215 11:58:20.345206 2371 log.go:172] (0xc0008702c0) (0xc0006e6000) Stream removed, broadcasting: 3\nI0215 11:58:20.345700 2371 log.go:172] (0xc0008702c0) (0xc000434000) Stream removed, broadcasting: 5\nI0215 11:58:20.345878 2371 log.go:172] (0xc0008702c0) Go away received\nI0215 11:58:20.346214 2371 log.go:172] (0xc0008702c0) (0xc000759220) Stream removed, broadcasting: 1\nI0215 11:58:20.346292 2371 log.go:172] (0xc0008702c0) (0xc0006e6000) Stream removed, broadcasting: 3\nI0215 11:58:20.346304 2371 log.go:172] (0xc0008702c0) (0xc000434000) Stream removed, broadcasting: 5\n" Feb 15 11:58:20.359: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:58:20.359: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:58:20.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:58:20.609: INFO: rc: 1 Feb 15 11:58:20.609: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00243db90 exit status 1 true [0xc001d003d0 0xc001d003e8 0xc001d00400] [0xc001d003d0 0xc001d003e8 0xc001d00400] [0xc001d003e0 0xc001d003f8] [0x935700 0x935700] 0xc001968ae0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 15 11:58:30.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:58:31.209: INFO: stderr: "I0215 11:58:30.851693 2415 log.go:172] (0xc0007282c0) (0xc000752640) Create stream\nI0215 11:58:30.851934 2415 log.go:172] (0xc0007282c0) (0xc000752640) Stream added, broadcasting: 1\nI0215 11:58:30.859607 2415 log.go:172] (0xc0007282c0) Reply frame received for 1\nI0215 11:58:30.859673 2415 log.go:172] (0xc0007282c0) (0xc00065ac80) Create stream\nI0215 11:58:30.859692 2415 log.go:172] 
(0xc0007282c0) (0xc00065ac80) Stream added, broadcasting: 3\nI0215 11:58:30.861018 2415 log.go:172] (0xc0007282c0) Reply frame received for 3\nI0215 11:58:30.861072 2415 log.go:172] (0xc0007282c0) (0xc0006a4000) Create stream\nI0215 11:58:30.861087 2415 log.go:172] (0xc0007282c0) (0xc0006a4000) Stream added, broadcasting: 5\nI0215 11:58:30.862048 2415 log.go:172] (0xc0007282c0) Reply frame received for 5\nI0215 11:58:31.028376 2415 log.go:172] (0xc0007282c0) Data frame received for 3\nI0215 11:58:31.028831 2415 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0215 11:58:31.028891 2415 log.go:172] (0xc00065ac80) (3) Data frame sent\nI0215 11:58:31.028939 2415 log.go:172] (0xc0007282c0) Data frame received for 5\nI0215 11:58:31.028975 2415 log.go:172] (0xc0006a4000) (5) Data frame handling\nI0215 11:58:31.029017 2415 log.go:172] (0xc0006a4000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0215 11:58:31.193375 2415 log.go:172] (0xc0007282c0) Data frame received for 1\nI0215 11:58:31.193476 2415 log.go:172] (0xc0007282c0) (0xc00065ac80) Stream removed, broadcasting: 3\nI0215 11:58:31.193553 2415 log.go:172] (0xc000752640) (1) Data frame handling\nI0215 11:58:31.193566 2415 log.go:172] (0xc000752640) (1) Data frame sent\nI0215 11:58:31.193574 2415 log.go:172] (0xc0007282c0) (0xc000752640) Stream removed, broadcasting: 1\nI0215 11:58:31.195746 2415 log.go:172] (0xc0007282c0) (0xc0006a4000) Stream removed, broadcasting: 5\nI0215 11:58:31.195931 2415 log.go:172] (0xc0007282c0) Go away received\nI0215 11:58:31.196245 2415 log.go:172] (0xc0007282c0) (0xc000752640) Stream removed, broadcasting: 1\nI0215 11:58:31.196297 2415 log.go:172] (0xc0007282c0) (0xc00065ac80) Stream removed, broadcasting: 3\nI0215 11:58:31.196315 2415 log.go:172] (0xc0007282c0) (0xc0006a4000) Stream removed, broadcasting: 5\n" Feb 15 11:58:31.210: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:58:31.210: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:58:31.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:58:31.706: INFO: stderr: "I0215 11:58:31.381475 2437 log.go:172] (0xc000734370) (0xc00077a640) Create stream\nI0215 11:58:31.381858 2437 log.go:172] (0xc000734370) (0xc00077a640) Stream added, broadcasting: 1\nI0215 11:58:31.386539 2437 log.go:172] (0xc000734370) Reply frame received for 1\nI0215 11:58:31.386615 2437 log.go:172] (0xc000734370) (0xc000664f00) Create stream\nI0215 11:58:31.386627 2437 log.go:172] (0xc000734370) (0xc000664f00) Stream added, broadcasting: 3\nI0215 11:58:31.387874 2437 log.go:172] (0xc000734370) Reply frame received for 3\nI0215 11:58:31.387897 2437 log.go:172] (0xc000734370) (0xc000614000) Create stream\nI0215 11:58:31.387935 2437 log.go:172] (0xc000734370) (0xc000614000) Stream added, broadcasting: 5\nI0215 11:58:31.389681 2437 log.go:172] (0xc000734370) Reply frame received for 5\nI0215 11:58:31.531183 2437 log.go:172] (0xc000734370) Data frame received for 3\nI0215 11:58:31.531396 2437 log.go:172] (0xc000664f00) (3) Data frame handling\nI0215 11:58:31.531451 2437 log.go:172] (0xc000664f00) (3) Data frame sent\nI0215 11:58:31.531701 2437 log.go:172] (0xc000734370) Data frame received for 5\nI0215 11:58:31.531795 2437 log.go:172] (0xc000614000) (5) Data frame 
handling\nI0215 11:58:31.531810 2437 log.go:172] (0xc000614000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0215 11:58:31.694633 2437 log.go:172] (0xc000734370) Data frame received for 1\nI0215 11:58:31.694911 2437 log.go:172] (0xc000734370) (0xc000614000) Stream removed, broadcasting: 5\nI0215 11:58:31.695123 2437 log.go:172] (0xc00077a640) (1) Data frame handling\nI0215 11:58:31.695159 2437 log.go:172] (0xc000734370) (0xc000664f00) Stream removed, broadcasting: 3\nI0215 11:58:31.695171 2437 log.go:172] (0xc00077a640) (1) Data frame sent\nI0215 11:58:31.695183 2437 log.go:172] (0xc000734370) (0xc00077a640) Stream removed, broadcasting: 1\nI0215 11:58:31.695223 2437 log.go:172] (0xc000734370) Go away received\nI0215 11:58:31.696112 2437 log.go:172] (0xc000734370) (0xc00077a640) Stream removed, broadcasting: 1\nI0215 11:58:31.696134 2437 log.go:172] (0xc000734370) (0xc000664f00) Stream removed, broadcasting: 3\nI0215 11:58:31.696140 2437 log.go:172] (0xc000734370) (0xc000614000) Stream removed, broadcasting: 5\n" Feb 15 11:58:31.707: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 11:58:31.707: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 11:58:31.725: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:58:31.725: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 11:58:31.725: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 15 11:58:31.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:58:32.220: INFO: stderr: "I0215 11:58:31.957720 2458 log.go:172] (0xc0001386e0) (0xc000726640) Create stream\nI0215 11:58:31.957946 2458 log.go:172] (0xc0001386e0) (0xc000726640) Stream added, broadcasting: 1\nI0215 11:58:31.962216 2458 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0215 11:58:31.962249 2458 log.go:172] (0xc0001386e0) (0xc0005d6c80) Create stream\nI0215 11:58:31.962258 2458 log.go:172] (0xc0001386e0) (0xc0005d6c80) Stream added, broadcasting: 3\nI0215 11:58:31.963739 2458 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0215 11:58:31.963766 2458 log.go:172] (0xc0001386e0) (0xc000212000) Create stream\nI0215 11:58:31.963776 2458 log.go:172] (0xc0001386e0) (0xc000212000) Stream added, broadcasting: 5\nI0215 11:58:31.964863 2458 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0215 11:58:32.055840 2458 log.go:172] (0xc0001386e0) Data frame received for 3\nI0215 11:58:32.055929 2458 log.go:172] (0xc0005d6c80) (3) Data frame handling\nI0215 11:58:32.055964 2458 log.go:172] (0xc0005d6c80) (3) Data frame sent\nI0215 11:58:32.206518 2458 log.go:172] (0xc0001386e0) (0xc000212000) Stream removed, broadcasting: 5\nI0215 11:58:32.206896 2458 log.go:172] (0xc0001386e0) Data frame received for 1\nI0215 11:58:32.207050 2458 log.go:172] (0xc0001386e0) (0xc0005d6c80) Stream removed, broadcasting: 3\nI0215 11:58:32.207172 2458 log.go:172] (0xc000726640) (1) Data frame handling\nI0215 11:58:32.207229 2458 log.go:172] (0xc000726640) (1) Data frame sent\nI0215 11:58:32.207247 2458 log.go:172] (0xc0001386e0) (0xc000726640) Stream removed, broadcasting: 1\nI0215 11:58:32.207282 2458 
log.go:172] (0xc0001386e0) Go away received\nI0215 11:58:32.207988 2458 log.go:172] (0xc0001386e0) (0xc000726640) Stream removed, broadcasting: 1\nI0215 11:58:32.208017 2458 log.go:172] (0xc0001386e0) (0xc0005d6c80) Stream removed, broadcasting: 3\nI0215 11:58:32.208031 2458 log.go:172] (0xc0001386e0) (0xc000212000) Stream removed, broadcasting: 5\n" Feb 15 11:58:32.220: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:58:32.220: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:58:32.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:58:32.828: INFO: stderr: "I0215 11:58:32.368826 2480 log.go:172] (0xc0001380b0) (0xc0006ae5a0) Create stream\nI0215 11:58:32.369029 2480 log.go:172] (0xc0001380b0) (0xc0006ae5a0) Stream added, broadcasting: 1\nI0215 11:58:32.372929 2480 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0215 11:58:32.372964 2480 log.go:172] (0xc0001380b0) (0xc000546c80) Create stream\nI0215 11:58:32.372974 2480 log.go:172] (0xc0001380b0) (0xc000546c80) Stream added, broadcasting: 3\nI0215 11:58:32.373784 2480 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0215 11:58:32.373824 2480 log.go:172] (0xc0001380b0) (0xc0007a2000) Create stream\nI0215 11:58:32.373830 2480 log.go:172] (0xc0001380b0) (0xc0007a2000) Stream added, broadcasting: 5\nI0215 11:58:32.374571 2480 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0215 11:58:32.605471 2480 log.go:172] (0xc0001380b0) Data frame received for 3\nI0215 11:58:32.605556 2480 log.go:172] (0xc000546c80) (3) Data frame handling\nI0215 11:58:32.605574 2480 log.go:172] (0xc000546c80) (3) Data frame sent\nI0215 11:58:32.814248 2480 log.go:172] (0xc0001380b0) Data frame received for 1\nI0215 11:58:32.814409 2480 log.go:172] (0xc0006ae5a0) (1) Data frame handling\nI0215 11:58:32.814456 2480 log.go:172] (0xc0006ae5a0) (1) Data frame sent\nI0215 11:58:32.814478 2480 log.go:172] (0xc0001380b0) (0xc0006ae5a0) Stream removed, broadcasting: 1\nI0215 11:58:32.814798 2480 log.go:172] (0xc0001380b0) (0xc0007a2000) Stream removed, broadcasting: 5\nI0215 11:58:32.814996 2480 log.go:172] (0xc0001380b0) (0xc000546c80) Stream removed, broadcasting: 3\nI0215 11:58:32.815554 2480 log.go:172] (0xc0001380b0) (0xc0006ae5a0) Stream removed, broadcasting: 1\nI0215 11:58:32.815609 2480 log.go:172] (0xc0001380b0) (0xc000546c80) Stream removed, broadcasting: 3\nI0215 11:58:32.815622 2480 log.go:172] (0xc0001380b0) (0xc0007a2000) Stream removed, broadcasting: 5\nI0215 11:58:32.815763 2480 log.go:172] (0xc0001380b0) Go away received\n" Feb 15 11:58:32.828: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:58:32.828: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:58:32.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 11:58:33.436: INFO: stderr: "I0215 11:58:33.119768 2500 log.go:172] (0xc0006fc370) (0xc00072a640) Create stream\nI0215 11:58:33.120098 2500 log.go:172] (0xc0006fc370) (0xc00072a640) Stream added, broadcasting: 1\nI0215 11:58:33.126310 2500 log.go:172] (0xc0006fc370) Reply frame 
received for 1\nI0215 11:58:33.126358 2500 log.go:172] (0xc0006fc370) (0xc00038ee60) Create stream\nI0215 11:58:33.126380 2500 log.go:172] (0xc0006fc370) (0xc00038ee60) Stream added, broadcasting: 3\nI0215 11:58:33.127893 2500 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0215 11:58:33.127928 2500 log.go:172] (0xc0006fc370) (0xc00038efa0) Create stream\nI0215 11:58:33.127939 2500 log.go:172] (0xc0006fc370) (0xc00038efa0) Stream added, broadcasting: 5\nI0215 11:58:33.132282 2500 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0215 11:58:33.318107 2500 log.go:172] (0xc0006fc370) Data frame received for 3\nI0215 11:58:33.318199 2500 log.go:172] (0xc00038ee60) (3) Data frame handling\nI0215 11:58:33.318233 2500 log.go:172] (0xc00038ee60) (3) Data frame sent\nI0215 11:58:33.426310 2500 log.go:172] (0xc0006fc370) Data frame received for 1\nI0215 11:58:33.426690 2500 log.go:172] (0xc0006fc370) (0xc00038efa0) Stream removed, broadcasting: 5\nI0215 11:58:33.426753 2500 log.go:172] (0xc00072a640) (1) Data frame handling\nI0215 11:58:33.426785 2500 log.go:172] (0xc00072a640) (1) Data frame sent\nI0215 11:58:33.426850 2500 log.go:172] (0xc0006fc370) (0xc00038ee60) Stream removed, broadcasting: 3\nI0215 11:58:33.426894 2500 log.go:172] (0xc0006fc370) (0xc00072a640) Stream removed, broadcasting: 1\nI0215 11:58:33.426910 2500 log.go:172] (0xc0006fc370) Go away received\nI0215 11:58:33.427852 2500 log.go:172] (0xc0006fc370) (0xc00072a640) Stream removed, broadcasting: 1\nI0215 11:58:33.427866 2500 log.go:172] (0xc0006fc370) (0xc00038ee60) Stream removed, broadcasting: 3\nI0215 11:58:33.427878 2500 log.go:172] (0xc0006fc370) (0xc00038efa0) Stream removed, broadcasting: 5\n" Feb 15 11:58:33.436: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 11:58:33.436: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 11:58:33.436: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 11:58:33.445: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 15 11:58:43.514: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:58:43.514: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:58:43.514: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 15 11:58:43.614: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:43.615: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:43.615: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 
11:58:43.615: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:43.615: INFO: Feb 15 11:58:43.615: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:44.659: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:44.659: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:44.660: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:44.660: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:44.660: INFO: Feb 15 11:58:44.660: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:45.685: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:45.685: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:45.686: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:45.686: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:45.686: INFO: Feb 15 11:58:45.686: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:46.712: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:46.712: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:46.712: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:46.712: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:46.712: INFO: Feb 15 11:58:46.712: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:47.728: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:47.729: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:47.729: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:47.729: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:47.729: INFO: Feb 15 11:58:47.729: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:48.884: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:48.885: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:48.885: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:48.886: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:48.886: INFO: Feb 15 11:58:48.886: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:50.003: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:50.003: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:50.003: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:50.003: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:50.003: INFO: Feb 
15 11:58:50.003: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:51.024: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:51.024: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:51.024: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:51.024: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:51.024: INFO: Feb 15 11:58:51.024: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:52.053: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:52.054: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:52.054: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:52.054: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:52.054: INFO: Feb 15 11:58:52.054: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 11:58:53.064: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 11:58:53.065: INFO: ss-0 
hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:57:37 +0000 UTC }] Feb 15 11:58:53.065: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:53.065: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 11:58:08 +0000 UTC }] Feb 15 11:58:53.065: INFO: Feb 15 11:58:53.065: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-bfxn5 Feb 15 11:58:54.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:58:54.248: INFO: rc: 1 Feb 15 11:58:54.248: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002475560 exit status 1 true [0xc0012b00a0 0xc0012b00b8 0xc0012b00d0] [0xc0012b00a0 0xc0012b00b8 0xc0012b00d0] [0xc0012b00b0 0xc0012b00c8] [0x935700 0x935700] 0xc0024728a0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 15 11:59:04.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:59:04.404: INFO: rc: 1 Feb 15 11:59:04.404: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024756e0 exit status 1 true [0xc0012b00d8 0xc0012b00f0 0xc0012b0108] [0xc0012b00d8 0xc0012b00f0 0xc0012b0108] [0xc0012b00e8 0xc0012b0100] [0x935700 0x935700] 0xc002472b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 11:59:14.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:59:15.206: INFO: rc: 1 Feb 15 11:59:15.207: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00171e120 exit status 1 true [0xc00111c000 0xc00111c030 0xc00111c078] [0xc00111c000 0xc00111c030 0xc00111c078] [0xc00111c028 0xc00111c060] [0x935700 0x935700] 0xc0024ca1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 11:59:25.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:59:25.404: INFO: rc: 1 Feb 15 11:59:25.404: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002475800 exit status 1 true [0xc0012b0110 0xc0012b0128 0xc0012b0140] [0xc0012b0110 0xc0012b0128 0xc0012b0140] [0xc0012b0120 0xc0012b0138] [0x935700 0x935700] 0xc002472de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 11:59:35.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:59:35.584: INFO: rc: 1 Feb 15 11:59:35.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00171e270 exit status 1 true [0xc00111c080 0xc00111c0a0 0xc00111c0b8] [0xc00111c080 0xc00111c0a0 0xc00111c0b8] [0xc00111c098 0xc00111c0b0] [0x935700 0x935700] 0xc0024ca4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 11:59:45.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:59:45.799: INFO: rc: 1 Feb 15 11:59:45.799: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00171e390 exit status 1 true [0xc00111c0c0 0xc00111c0d8 0xc00111c0f0] [0xc00111c0c0 0xc00111c0d8 0xc00111c0f0] [0xc00111c0d0 0xc00111c0e8] [0x935700 0x935700] 0xc0024ca780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 11:59:55.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 11:59:55.995: INFO: rc: 1 Feb 15 11:59:55.996: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00171e4b0 exit status 1 true [0xc00111c0f8 0xc00111c110 0xc00111c128] [0xc00111c0f8 0xc00111c110 0xc00111c128] [0xc00111c108 0xc00111c120] [0x935700 0x935700] 0xc0024cbb60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:00:05.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:00:06.145: INFO: rc: 1 Feb 15 12:00:06.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00171e750 exit status 1 true [0xc00111c130 0xc00111c148 0xc00111c160] [0xc00111c130 0xc00111c148 0xc00111c160] [0xc00111c140 0xc00111c158] [0x935700 0x935700] 0xc0024cbe00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:00:16.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:00:16.314: INFO: rc: 1 Feb 15 12:00:16.315: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002475950 exit status 1 true [0xc0012b0148 0xc0012b0160 0xc0012b0178] [0xc0012b0148 0xc0012b0160 0xc0012b0178] [0xc0012b0158 0xc0012b0170] [0x935700 0x935700] 0xc002473080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:00:26.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:00:26.583: INFO: rc: 1 Feb 15 12:00:26.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001be7710 exit status 1 true [0xc0003501c8 0xc0003501e0 0xc0003501f8] [0xc0003501c8 0xc0003501e0 0xc0003501f8] [0xc0003501d8 0xc0003501f0] [0x935700 0x935700] 0xc0024127e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:00:36.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:00:36.789: INFO: rc: 1 Feb 15 12:00:36.790: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002475aa0 exit status 1 true [0xc0012b0180 0xc0012b0198 0xc0012b01b0] [0xc0012b0180 0xc0012b0198 0xc0012b01b0] [0xc0012b0190 0xc0012b01a8] [0x935700 0x935700] 0xc002473320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:00:46.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:00:46.932: INFO: rc: 1 Feb 15 12:00:46.933: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae8120 exit status 1 true [0xc00111c000 0xc00111c030 0xc00111c078] [0xc00111c000 0xc00111c030 0xc00111c078] [0xc00111c028 0xc00111c060] [0x935700 0x935700] 0xc0024721e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:00:56.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:00:57.116: INFO: rc: 1 Feb 15 12:00:57.116: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae8240 exit status 1 true [0xc00111c080 0xc00111c0a0 0xc00111c0b8] [0xc00111c080 0xc00111c0a0 0xc00111c0b8] [0xc00111c098 0xc00111c0b0] [0x935700 0x935700] 0xc002472480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:01:07.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:01:07.262: INFO: rc: 1 Feb 15 12:01:07.263: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001be6120 exit status 1 true [0xc0012b0000 0xc0012b0018 0xc0012b0030] [0xc0012b0000 0xc0012b0018 0xc0012b0030] [0xc0012b0010 0xc0012b0028] [0x935700 0x935700] 0xc0024121e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:01:17.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:01:17.447: INFO: rc: 1 Feb 15 12:01:17.447: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae8360 exit status 1 true [0xc00111c0c0 0xc00111c0d8 0xc00111c0f0] [0xc00111c0c0 0xc00111c0d8 
0xc00111c0f0] [0xc00111c0d0 0xc00111c0e8] [0x935700 0x935700] 0xc002472840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:01:27.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:01:27.682: INFO: rc: 1 Feb 15 12:01:27.682: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020d8150 exit status 1 true [0xc00044c020 0xc00044c068 0xc00044c088] [0xc00044c020 0xc00044c068 0xc00044c088] [0xc00044c058 0xc00044c078] [0x935700 0x935700] 0xc00197c1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:01:37.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:01:37.867: INFO: rc: 1 Feb 15 12:01:37.868: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020d82a0 exit status 1 true [0xc00044c0c0 0xc00044c128 0xc00044c190] [0xc00044c0c0 0xc00044c128 0xc00044c190] [0xc00044c100 0xc00044c170] [0x935700 0x935700] 0xc00197c480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:01:47.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:01:48.035: INFO: rc: 1 Feb 15 12:01:48.036: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001be6240 exit status 1 true [0xc0012b0038 0xc0012b0050 0xc0012b0068] [0xc0012b0038 0xc0012b0050 0xc0012b0068] [0xc0012b0048 0xc0012b0060] [0x935700 0x935700] 0xc002412480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:01:58.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:01:58.193: INFO: rc: 1 Feb 15 12:01:58.193: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001be6390 exit status 1 true [0xc0012b0070 0xc0012b0088 0xc0012b00a0] [0xc0012b0070 0xc0012b0088 0xc0012b00a0] [0xc0012b0080 0xc0012b0098] [0x935700 0x935700] 0xc002412720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 
12:02:08.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:02:08.354: INFO: rc: 1 Feb 15 12:02:08.355: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae84b0 exit status 1 true [0xc00111c0f8 0xc00111c110 0xc00111c128] [0xc00111c0f8 0xc00111c110 0xc00111c128] [0xc00111c108 0xc00111c120] [0x935700 0x935700] 0xc002472ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:02:18.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:02:18.584: INFO: rc: 1 Feb 15 12:02:18.585: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020d8540 exit status 1 true [0xc00044c1a0 0xc00044c1d0 0xc00044c1f0] [0xc00044c1a0 0xc00044c1d0 0xc00044c1f0] [0xc00044c1b8 0xc00044c1e8] [0x935700 0x935700] 0xc00197c720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:02:28.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:02:28.716: INFO: rc: 1 Feb 15 12:02:28.717: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002474150 exit status 1 true [0xc0003500b0 0xc0003500d8 0xc0003500f0] [0xc0003500b0 0xc0003500d8 0xc0003500f0] [0xc0003500d0 0xc0003500e8] [0x935700 0x935700] 0xc00254a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:02:38.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:02:38.861: INFO: rc: 1 Feb 15 12:02:38.862: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae8600 exit status 1 true [0xc00111c130 0xc00111c148 0xc00111c160] [0xc00111c130 0xc00111c148 0xc00111c160] [0xc00111c140 0xc00111c158] [0x935700 0x935700] 0xc002472d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:02:48.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ 
|| true' Feb 15 12:02:49.025: INFO: rc: 1 Feb 15 12:02:49.025: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001be6150 exit status 1 true [0xc0012b0000 0xc0012b0018 0xc0012b0030] [0xc0012b0000 0xc0012b0018 0xc0012b0030] [0xc0012b0010 0xc0012b0028] [0x935700 0x935700] 0xc0024121e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:02:59.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:02:59.207: INFO: rc: 1 Feb 15 12:02:59.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae8150 exit status 1 true [0xc00111c000 0xc00111c030 0xc00111c078] [0xc00111c000 0xc00111c030 0xc00111c078] [0xc00111c028 0xc00111c060] [0x935700 0x935700] 0xc0024721e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:03:09.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:03:09.355: INFO: rc: 1 Feb 15 12:03:09.356: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae82a0 exit status 1 true [0xc00111c080 0xc00111c0a0 0xc00111c0b8] [0xc00111c080 0xc00111c0a0 0xc00111c0b8] [0xc00111c098 0xc00111c0b0] [0x935700 0x935700] 0xc002472480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:03:19.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:03:19.530: INFO: rc: 1 Feb 15 12:03:19.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020d8120 exit status 1 true [0xc0003500b0 0xc0003500d8 0xc0003500f0] [0xc0003500b0 0xc0003500d8 0xc0003500f0] [0xc0003500d0 0xc0003500e8] [0x935700 0x935700] 0xc00254a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:03:29.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:03:29.759: INFO: rc: 1 Feb 15 12:03:29.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001be62a0 exit status 1 true [0xc0012b0038 0xc0012b0050 0xc0012b0068] [0xc0012b0038 0xc0012b0050 0xc0012b0068] [0xc0012b0048 0xc0012b0060] [0x935700 0x935700] 0xc002412480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:03:39.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:03:39.958: INFO: rc: 1 Feb 15 12:03:39.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ae8450 exit status 1 true [0xc00111c0c0 0xc00111c0d8 0xc00111c0f0] [0xc00111c0c0 0xc00111c0d8 0xc00111c0f0] [0xc00111c0d0 0xc00111c0e8] [0x935700 0x935700] 0xc002472840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:03:49.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:03:50.151: INFO: rc: 1 Feb 15 12:03:50.152: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020d8270 exit status 1 true [0xc0003500f8 0xc000350110 0xc000350140] [0xc0003500f8 0xc000350110 0xc000350140] [0xc000350108 0xc000350128] [0x935700 0x935700] 0xc00254a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 12:04:00.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bfxn5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 12:04:00.292: INFO: rc: 1 Feb 15 12:04:00.293: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 15 12:04:00.293: INFO: Scaling statefulset ss to 0 Feb 15 12:04:00.315: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 15 12:04:00.320: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bfxn5 Feb 15 12:04:00.322: INFO: Scaling statefulset ss to 0 Feb 15 12:04:00.335: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 12:04:00.343: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:04:00.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-bfxn5" for this suite. 
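For reference, the scale-down and the status.replicas wait that the suite drives programmatically above can be approximated with plain kubectl. This is an illustrative sketch against the namespace from this run, and it assumes the StatefulSet ss still exists at the moment the commands are issued:

# scale the StatefulSet down to zero replicas
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-bfxn5 scale statefulset ss --replicas=0
# poll the observed replica count until it reports 0
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-bfxn5 get statefulset ss -o jsonpath='{.status.replicas}'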
Feb 15 12:04:08.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:04:08.759: INFO: namespace: e2e-tests-statefulset-bfxn5, resource: bindings, ignored listing per whitelist Feb 15 12:04:08.786: INFO: namespace e2e-tests-statefulset-bfxn5 deletion completed in 8.408224646s • [SLOW TEST:391.436 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:04:08.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 15 12:04:21.630: INFO: Successfully updated pod "labelsupdate45793e1f-4feb-11ea-960a-0242ac110007" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:04:23.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sdzts" for this suite. 
Feb 15 12:04:45.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:04:45.867: INFO: namespace: e2e-tests-projected-sdzts, resource: bindings, ignored listing per whitelist Feb 15 12:04:45.953: INFO: namespace e2e-tests-projected-sdzts deletion completed in 22.23271861s • [SLOW TEST:37.167 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:04:45.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-5bb33cba-4feb-11ea-960a-0242ac110007 STEP: Creating secret with name secret-projected-all-test-volume-5bb33c4c-4feb-11ea-960a-0242ac110007 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 15 12:04:46.321: INFO: Waiting up to 5m0s for pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-8d7n7" to be "success or failure" Feb 15 12:04:46.341: INFO: Pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.167359ms Feb 15 12:04:48.385: INFO: Pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063501672s Feb 15 12:04:50.397: INFO: Pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07593404s Feb 15 12:04:52.421: INFO: Pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099729411s Feb 15 12:04:54.438: INFO: Pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117080703s Feb 15 12:04:56.453: INFO: Pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.132370101s STEP: Saw pod success Feb 15 12:04:56.454: INFO: Pod "projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:04:56.459: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007 container projected-all-volume-test: STEP: delete the pod Feb 15 12:04:57.400: INFO: Waiting for pod projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007 to disappear Feb 15 12:04:57.704: INFO: Pod projected-volume-5bb33a59-4feb-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:04:57.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8d7n7" for this suite. Feb 15 12:05:03.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:05:04.126: INFO: namespace: e2e-tests-projected-8d7n7, resource: bindings, ignored listing per whitelist Feb 15 12:05:04.194: INFO: namespace e2e-tests-projected-8d7n7 deletion completed in 6.460394483s • [SLOW TEST:18.240 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:05:04.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 15 12:05:04.508: INFO: Waiting up to 5m0s for pod "pod-668f2081-4feb-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-zv8mz" to be "success or failure" Feb 15 12:05:04.529: INFO: Pod "pod-668f2081-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.671431ms Feb 15 12:05:08.755: INFO: Pod "pod-668f2081-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246808244s Feb 15 12:05:10.790: INFO: Pod "pod-668f2081-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281879895s Feb 15 12:05:13.116: INFO: Pod "pod-668f2081-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.60819325s Feb 15 12:05:15.143: INFO: Pod "pod-668f2081-4feb-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.634447777s Feb 15 12:05:17.156: INFO: Pod "pod-668f2081-4feb-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.648350865s STEP: Saw pod success Feb 15 12:05:17.157: INFO: Pod "pod-668f2081-4feb-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:05:17.161: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-668f2081-4feb-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 12:05:18.091: INFO: Waiting for pod pod-668f2081-4feb-11ea-960a-0242ac110007 to disappear Feb 15 12:05:18.100: INFO: Pod pod-668f2081-4feb-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:05:18.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zv8mz" for this suite. Feb 15 12:05:24.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:05:24.399: INFO: namespace: e2e-tests-emptydir-zv8mz, resource: bindings, ignored listing per whitelist Feb 15 12:05:24.404: INFO: namespace e2e-tests-emptydir-zv8mz deletion completed in 6.296952375s • [SLOW TEST:20.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:05:24.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-9lc7w in namespace e2e-tests-proxy-48qgp I0215 12:05:24.816348 8 runners.go:184] Created replication controller with name: proxy-service-9lc7w, namespace: e2e-tests-proxy-48qgp, replica count: 1 I0215 12:05:25.867395 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:26.868002 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:27.868640 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:28.870145 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:29.871066 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:30.871530 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0215 12:05:31.872087 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:32.872537 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:33.873486 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0215 12:05:34.874255 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0215 12:05:35.875057 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0215 12:05:36.875804 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0215 12:05:37.876689 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0215 12:05:38.877927 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0215 12:05:39.882840 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0215 12:05:40.883684 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0215 12:05:41.884939 8 runners.go:184] proxy-service-9lc7w Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 15 12:05:41.903: INFO: setup took 17.172669458s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 15 12:05:41.937: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-48qgp/pods/http:proxy-service-9lc7w-qmd65:160/proxy/: foo (200; 33.340489ms) Feb 15 12:05:41.957: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-48qgp/pods/http:proxy-service-9lc7w-qmd65:162/proxy/: bar (200; 53.112673ms) Feb 15 12:05:41.968: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-48qgp/pods/proxy-service-9lc7w-qmd65:160/proxy/: foo (200; 64.461546ms) Feb 15 12:05:41.970: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-48qgp/pods/proxy-service-9lc7w-qmd65:162/proxy/: bar (200; 66.543646ms) Feb 15 12:05:41.975: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-48qgp/pods/http:proxy-service-9lc7w-qmd65:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 15 12:09:05.732: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:05.771: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:07.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:07.785: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:09.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:09.798: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:11.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:11.797: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:13.772: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:13.808: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:15.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:15.790: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:17.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:17.790: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:19.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:19.790: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:21.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:21.789: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:23.772: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:23.897: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:25.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:25.785: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:27.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:27.899: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:29.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:29.796: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:31.772: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:31.809: INFO: Pod pod-with-poststart-exec-hook still exists Feb 15 12:09:33.772: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 15 12:09:33.801: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:09:33.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gmnjt" for this suite. 
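For context on the postStart exec hook exercised above, a minimal stand-alone pod spec looks roughly like the following. The pod name, image and commands are assumptions for illustration, not the ones this test created:

# create a pod whose container runs an exec hook right after it starts
kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo        # hypothetical name, not from this run
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]   # runs once, right after the container starts
EOF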
Feb 15 12:10:05.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:10:05.959: INFO: namespace: e2e-tests-container-lifecycle-hook-gmnjt, resource: bindings, ignored listing per whitelist Feb 15 12:10:06.070: INFO: namespace e2e-tests-container-lifecycle-hook-gmnjt deletion completed in 32.232364569s • [SLOW TEST:247.060 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:10:06.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 15 12:10:06.432: INFO: Waiting up to 5m0s for pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-tdnfv" to be "success or failure" Feb 15 12:10:06.450: INFO: Pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.331317ms Feb 15 12:10:08.544: INFO: Pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111502018s Feb 15 12:10:10.575: INFO: Pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142021243s Feb 15 12:10:12.744: INFO: Pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311264505s Feb 15 12:10:14.773: INFO: Pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340921994s Feb 15 12:10:16.787: INFO: Pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.354673281s STEP: Saw pod success Feb 15 12:10:16.788: INFO: Pod "downward-api-1a7a989a-4fec-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:10:16.795: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1a7a989a-4fec-11ea-960a-0242ac110007 container dapi-container: STEP: delete the pod Feb 15 12:10:17.848: INFO: Waiting for pod downward-api-1a7a989a-4fec-11ea-960a-0242ac110007 to disappear Feb 15 12:10:17.867: INFO: Pod downward-api-1a7a989a-4fec-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:10:17.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tdnfv" for this suite. Feb 15 12:10:23.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:10:24.162: INFO: namespace: e2e-tests-downward-api-tdnfv, resource: bindings, ignored listing per whitelist Feb 15 12:10:24.209: INFO: namespace e2e-tests-downward-api-tdnfv deletion completed in 6.319405406s • [SLOW TEST:18.138 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:10:24.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:10:34.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-87lnp" for this suite. 
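The read-only busybox check above comes down to the container-level securityContext field. A minimal sketch of a pod that demonstrates the behavior follows; the name, image and probe command are illustrative assumptions, not taken from this run:

# a write to the root filesystem should fail when readOnlyRootFilesystem is true
kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo    # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /should-fail; echo write-exit-code=$?"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# once the pod has run, its log should show a non-zero exit code for the write attempt
kubectl --kubeconfig=/root/.kube/config logs readonly-rootfs-demo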
Feb 15 12:11:16.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:11:16.764: INFO: namespace: e2e-tests-kubelet-test-87lnp, resource: bindings, ignored listing per whitelist Feb 15 12:11:16.838: INFO: namespace e2e-tests-kubelet-test-87lnp deletion completed in 42.208722036s • [SLOW TEST:52.629 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:11:16.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:11:17.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2vrsq" for this suite. 
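The "secure master service" check that just passed essentially concerns the built-in kubernetes Service in the default namespace, which fronts the API server over port 443. It can be inspected directly with read-only commands on any cluster:

# the ClusterIP service the conformance test expects to exist
kubectl --kubeconfig=/root/.kube/config get service kubernetes --namespace=default -o wide
# the endpoints behind it point at the API server address(es)
kubectl --kubeconfig=/root/.kube/config get endpoints kubernetes --namespace=default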
Feb 15 12:11:25.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:11:25.166: INFO: namespace: e2e-tests-services-2vrsq, resource: bindings, ignored listing per whitelist Feb 15 12:11:25.265: INFO: namespace e2e-tests-services-2vrsq deletion completed in 8.195656373s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:8.425 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:11:25.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 12:11:25.521: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-l6zmf" to be "success or failure" Feb 15 12:11:25.596: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 74.605473ms Feb 15 12:11:27.856: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335179141s Feb 15 12:11:29.878: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357220026s Feb 15 12:11:32.108: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586809938s Feb 15 12:11:34.128: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607371137s Feb 15 12:11:36.180: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65939569s Feb 15 12:11:38.579: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.057564235s STEP: Saw pod success Feb 15 12:11:38.579: INFO: Pod "downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:11:38.640: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 12:11:38.893: INFO: Waiting for pod downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007 to disappear Feb 15 12:11:38.929: INFO: Pod downwardapi-volume-49aa8167-4fec-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:11:38.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l6zmf" for this suite. Feb 15 12:11:44.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:11:45.056: INFO: namespace: e2e-tests-downward-api-l6zmf, resource: bindings, ignored listing per whitelist Feb 15 12:11:45.103: INFO: namespace e2e-tests-downward-api-l6zmf deletion completed in 6.162061828s • [SLOW TEST:19.838 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:11:45.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-5579a5c5-4fec-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 12:11:45.335: INFO: Waiting up to 5m0s for pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-bn2cr" to be "success or failure" Feb 15 12:11:45.379: INFO: Pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 43.7929ms Feb 15 12:11:47.392: INFO: Pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056212134s Feb 15 12:11:49.405: INFO: Pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069143449s Feb 15 12:11:51.431: INFO: Pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095228919s Feb 15 12:11:53.444: INFO: Pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10874073s Feb 15 12:11:55.469: INFO: Pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.133507677s STEP: Saw pod success Feb 15 12:11:55.469: INFO: Pod "pod-secrets-557aa307-4fec-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:11:55.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-557aa307-4fec-11ea-960a-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 15 12:11:56.261: INFO: Waiting for pod pod-secrets-557aa307-4fec-11ea-960a-0242ac110007 to disappear Feb 15 12:11:56.331: INFO: Pod pod-secrets-557aa307-4fec-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:11:56.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bn2cr" for this suite. Feb 15 12:12:02.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:12:02.963: INFO: namespace: e2e-tests-secrets-bn2cr, resource: bindings, ignored listing per whitelist Feb 15 12:12:02.991: INFO: namespace e2e-tests-secrets-bn2cr deletion completed in 6.58538519s • [SLOW TEST:17.887 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:12:02.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 15 12:12:23.645: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:23.684: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:25.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:25.708: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:27.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:27.797: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:29.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:29.730: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:31.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:31.701: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:33.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:33.702: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:35.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:35.747: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:37.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:37.719: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:39.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:39.718: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:41.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:41.777: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:43.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:43.706: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:45.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:45.704: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:47.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:47.703: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:49.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:49.705: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:51.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:51.700: INFO: Pod pod-with-prestop-exec-hook still exists Feb 15 12:12:53.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 15 12:12:53.762: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:12:53.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4rg9p" for this suite. 
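For context, pod-with-prestop-exec-hook is built around a container lifecycle preStop handler. A sketch of that shape is below; the image and the hook command are illustrative assumptions, and it uses the corev1.Handler type matching the v1.13-era API of this run (newer k8s.io/api releases name it LifecycleHandler).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox", // illustrative; the suite uses its own test images
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container just before it is stopped; the test
					// then checks that the hook's side effect was observed.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop > /tmp/prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Printf("preStop hook: %+v\n", pod.Spec.Containers[0].Lifecycle.PreStop.Exec)
}

The long "still exists" wait loop in the log is the termination path: the pod only disappears after the preStop command has had a chance to run and the grace period elapses.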
Feb 15 12:13:17.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:13:18.006: INFO: namespace: e2e-tests-container-lifecycle-hook-4rg9p, resource: bindings, ignored listing per whitelist Feb 15 12:13:18.048: INFO: namespace e2e-tests-container-lifecycle-hook-4rg9p deletion completed in 24.243946314s • [SLOW TEST:75.057 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:13:18.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-8cf28ee4-4fec-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 12:13:18.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-n7xtq" to be "success or failure" Feb 15 12:13:18.637: INFO: Pod "pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 131.772721ms Feb 15 12:13:20.650: INFO: Pod "pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144723285s Feb 15 12:13:22.677: INFO: Pod "pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171919253s Feb 15 12:13:24.722: INFO: Pod "pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216722535s Feb 15 12:13:26.740: INFO: Pod "pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.234471302s STEP: Saw pod success Feb 15 12:13:26.740: INFO: Pod "pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:13:26.756: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 15 12:13:26.877: INFO: Waiting for pod pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007 to disappear Feb 15 12:13:26.976: INFO: Pod pod-configmaps-8cfcbc10-4fec-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:13:26.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-n7xtq" for this suite. 
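The "multiple volumes in the same pod" case mounts one ConfigMap through two separate volumes in a single container. A rough sketch of such a pod spec, with illustrative names, mount paths, and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One ConfigMap, referenced by two volumes and mounted at two paths in the
	// same container; the test reads the same key through both mounts.
	cmRef := corev1.LocalObjectReference{Name: "configmap-test-volume"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "cm-volume-1", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
				{Name: "cm-volume-2", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-volume-1", MountPath: "/etc/cm-volume-1"},
					{Name: "cm-volume-2", MountPath: "/etc/cm-volume-2"},
				},
			}},
		},
	}
	fmt.Printf("volumes: %d, mounts: %d\n", len(pod.Spec.Volumes), len(pod.Spec.Containers[0].VolumeMounts))
}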
Feb 15 12:13:33.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:13:33.287: INFO: namespace: e2e-tests-configmap-n7xtq, resource: bindings, ignored listing per whitelist Feb 15 12:13:33.297: INFO: namespace e2e-tests-configmap-n7xtq deletion completed in 6.302728414s • [SLOW TEST:15.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:13:33.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0215 12:14:04.680843 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 15 12:14:04.681: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:14:04.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-r4jn8" for this suite. 
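The garbage-collector case above hinges on the delete propagation policy: with Orphan, the Deployment is removed but its ReplicaSet is intentionally left behind. A minimal client-go sketch of that call, with an assumed deployment name, namespace, kubeconfig path, and a recent client-go signature:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// PropagationPolicy=Orphan tells the garbage collector not to cascade the
	// delete, so the owned ReplicaSet (and its Pods) survive, which is what the
	// spec above asserts after its 30-second wait.
	orphan := metav1.DeletePropagationOrphan
	err = client.AppsV1().Deployments("default").Delete(context.TODO(),
		"example-deployment", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		log.Fatal(err)
	}
}

Deleting with Foreground or Background propagation instead would remove the ReplicaSet as well, which is the contrast the other garbage-collector specs in this suite exercise.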
Feb 15 12:14:14.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:14:15.601: INFO: namespace: e2e-tests-gc-r4jn8, resource: bindings, ignored listing per whitelist Feb 15 12:14:15.604: INFO: namespace e2e-tests-gc-r4jn8 deletion completed in 10.919452665s • [SLOW TEST:42.307 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:14:15.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-af8be3a6-4fec-11ea-960a-0242ac110007 STEP: Creating configMap with name cm-test-opt-upd-af8be471-4fec-11ea-960a-0242ac110007 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-af8be3a6-4fec-11ea-960a-0242ac110007 STEP: Updating configmap cm-test-opt-upd-af8be471-4fec-11ea-960a-0242ac110007 STEP: Creating configMap with name cm-test-opt-create-af8be49d-4fec-11ea-960a-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:14:33.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nfxgx" for this suite. 
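The "optional updates" test relies on ConfigMap volume sources being marked optional, so the pod keeps running while ConfigMaps are deleted, updated, and created underneath it. A sketch of such a volume source, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	// An optional ConfigMap volume: the pod starts even if the ConfigMap is
	// absent, and the kubelet projects the keys into the volume once it exists,
	// which is the update the test waits to observe.
	vol := corev1.Volume{
		Name: "cm-test-opt-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
				Optional:             &optional,
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}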
Feb 15 12:15:05.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:15:05.395: INFO: namespace: e2e-tests-configmap-nfxgx, resource: bindings, ignored listing per whitelist Feb 15 12:15:05.584: INFO: namespace e2e-tests-configmap-nfxgx deletion completed in 32.317318631s • [SLOW TEST:49.980 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:15:05.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-cngj8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cngj8 to expose endpoints map[] Feb 15 12:15:05.974: INFO: Get endpoints failed (31.486461ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 15 12:15:06.991: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cngj8 exposes endpoints map[] (1.047983292s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-cngj8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cngj8 to expose endpoints map[pod1:[100]] Feb 15 12:15:11.423: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.409469322s elapsed, will retry) Feb 15 12:15:18.449: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cngj8 exposes endpoints map[pod1:[100]] (11.435300398s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-cngj8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cngj8 to expose endpoints map[pod1:[100] pod2:[101]] Feb 15 12:15:24.407: INFO: Unexpected endpoints: found map[cdb09a75-4fec-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.936995487s elapsed, will retry) Feb 15 12:15:26.479: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cngj8 exposes endpoints map[pod1:[100] pod2:[101]] (8.009108156s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-cngj8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cngj8 to expose endpoints map[pod2:[101]] Feb 15 12:15:27.666: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cngj8 exposes endpoints map[pod2:[101]] (1.167024675s elapsed) STEP: Deleting pod pod2 in namespace 
e2e-tests-services-cngj8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cngj8 to expose endpoints map[] Feb 15 12:15:29.047: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cngj8 exposes endpoints map[] (1.360719831s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:15:30.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-cngj8" for this suite. Feb 15 12:15:54.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:15:54.398: INFO: namespace: e2e-tests-services-cngj8, resource: bindings, ignored listing per whitelist Feb 15 12:15:54.428: INFO: namespace e2e-tests-services-cngj8 deletion completed in 24.359161896s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:48.843 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:15:54.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 15 12:15:54.807: INFO: Waiting up to 5m0s for pod "pod-ea2dde42-4fec-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-nqg5p" to be "success or failure" Feb 15 12:15:54.842: INFO: Pod "pod-ea2dde42-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 34.105561ms Feb 15 12:15:57.061: INFO: Pod "pod-ea2dde42-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253143307s Feb 15 12:15:59.850: INFO: Pod "pod-ea2dde42-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.042546156s Feb 15 12:16:01.877: INFO: Pod "pod-ea2dde42-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.069267782s Feb 15 12:16:03.904: INFO: Pod "pod-ea2dde42-4fec-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.096600724s STEP: Saw pod success Feb 15 12:16:03.905: INFO: Pod "pod-ea2dde42-4fec-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:16:03.921: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ea2dde42-4fec-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 12:16:04.527: INFO: Waiting for pod pod-ea2dde42-4fec-11ea-960a-0242ac110007 to disappear Feb 15 12:16:04.539: INFO: Pod pod-ea2dde42-4fec-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:16:04.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nqg5p" for this suite. Feb 15 12:16:10.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:16:13.826: INFO: namespace: e2e-tests-emptydir-nqg5p, resource: bindings, ignored listing per whitelist Feb 15 12:16:14.091: INFO: namespace e2e-tests-emptydir-nqg5p deletion completed in 9.540139891s • [SLOW TEST:19.662 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:16:14.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f5f2d079-4fec-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 12:16:14.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-btnqt" to be "success or failure" Feb 15 12:16:14.893: INFO: Pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 274.835814ms Feb 15 12:16:17.257: INFO: Pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638167167s Feb 15 12:16:19.282: INFO: Pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663352603s Feb 15 12:16:21.353: INFO: Pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.734014271s Feb 15 12:16:23.387: INFO: Pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.768547782s Feb 15 12:16:25.454: INFO: Pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.835275837s STEP: Saw pod success Feb 15 12:16:25.455: INFO: Pod "pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:16:25.710: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 15 12:16:25.869: INFO: Waiting for pod pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007 to disappear Feb 15 12:16:25.904: INFO: Pod pod-configmaps-f5f65762-4fec-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:16:25.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-btnqt" for this suite. Feb 15 12:16:32.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:16:32.985: INFO: namespace: e2e-tests-configmap-btnqt, resource: bindings, ignored listing per whitelist Feb 15 12:16:33.000: INFO: namespace e2e-tests-configmap-btnqt deletion completed in 7.036224993s • [SLOW TEST:18.908 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:16:33.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:16:45.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-ks9cz" for this suite. 
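The Kubelet "terminated reason" case runs a command that always fails and then inspects the container status the kubelet reports. Roughly, with illustrative names and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits non-zero on every run
			}},
		},
	}
	// Once the container has run, the kubelet records a Terminated state with a
	// non-empty Reason (typically "Error") and an ExitCode in
	// pod.Status.ContainerStatuses[0].State / LastTerminationState, which is
	// what the conformance check reads back.
	fmt.Println(pod.Spec.Containers[0].Command)
}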
Feb 15 12:16:53.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:16:53.463: INFO: namespace: e2e-tests-kubelet-test-ks9cz, resource: bindings, ignored listing per whitelist Feb 15 12:16:53.583: INFO: namespace e2e-tests-kubelet-test-ks9cz deletion completed in 8.247417993s • [SLOW TEST:20.583 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:16:53.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 15 12:16:53.940: INFO: Waiting up to 5m0s for pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-wq8cb" to be "success or failure" Feb 15 12:16:53.997: INFO: Pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 56.85404ms Feb 15 12:16:56.046: INFO: Pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106058402s Feb 15 12:16:58.077: INFO: Pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136864931s Feb 15 12:17:00.780: INFO: Pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.839925436s Feb 15 12:17:03.273: INFO: Pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.33253897s Feb 15 12:17:05.287: INFO: Pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.3472638s STEP: Saw pod success Feb 15 12:17:05.288: INFO: Pod "pod-0d6a4fdc-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:17:05.295: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0d6a4fdc-4fed-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 12:17:05.615: INFO: Waiting for pod pod-0d6a4fdc-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:17:05.630: INFO: Pod pod-0d6a4fdc-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:17:05.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wq8cb" for this suite. 
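The emptyDir "on tmpfs" variants set the volume medium to Memory so the directory is backed by tmpfs rather than node disk. A sketch of that pod shape; the image, paths, and probe command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory backs the emptyDir with tmpfs, which is what the
					// "volume on tmpfs" specs above exercise.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].EmptyDir)
}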
Feb 15 12:17:11.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:17:11.950: INFO: namespace: e2e-tests-emptydir-wq8cb, resource: bindings, ignored listing per whitelist Feb 15 12:17:11.999: INFO: namespace e2e-tests-emptydir-wq8cb deletion completed in 6.35692302s • [SLOW TEST:18.415 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:17:11.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-1851f0e5-4fed-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 12:17:12.330: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-8jrbk" to be "success or failure" Feb 15 12:17:12.342: INFO: Pod "pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.856019ms Feb 15 12:17:14.351: INFO: Pod "pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021042706s Feb 15 12:17:16.384: INFO: Pod "pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054150471s Feb 15 12:17:18.577: INFO: Pod "pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246803932s Feb 15 12:17:20.640: INFO: Pod "pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.309767232s STEP: Saw pod success Feb 15 12:17:20.640: INFO: Pod "pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:17:20.892: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Feb 15 12:17:21.100: INFO: Waiting for pod pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:17:21.252: INFO: Pod pod-projected-secrets-18529aba-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:17:21.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8jrbk" for this suite. 
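The projected-secret defaultMode case controls the permission bits of the files the kubelet writes into the volume. A sketch of the volume source, with an assumed secret name and a 0400 default mode:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	// Every file projected from the secret is created with mode 0400 unless an
	// individual item overrides it via KeyToPath.Mode.
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
	fmt.Printf("defaultMode: %o\n", *vol.Projected.DefaultMode)
}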
Feb 15 12:17:29.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:17:29.482: INFO: namespace: e2e-tests-projected-8jrbk, resource: bindings, ignored listing per whitelist Feb 15 12:17:29.498: INFO: namespace e2e-tests-projected-8jrbk deletion completed in 8.238062803s • [SLOW TEST:17.499 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:17:29.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-22bcc608-4fed-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 12:17:29.731: INFO: Waiting up to 5m0s for pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-rlfjt" to be "success or failure" Feb 15 12:17:29.848: INFO: Pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 116.904617ms Feb 15 12:17:31.915: INFO: Pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184406267s Feb 15 12:17:33.995: INFO: Pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264339037s Feb 15 12:17:36.011: INFO: Pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280081798s Feb 15 12:17:38.032: INFO: Pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300666367s Feb 15 12:17:40.053: INFO: Pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.322035792s STEP: Saw pod success Feb 15 12:17:40.053: INFO: Pod "pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:17:40.060: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 15 12:17:40.255: INFO: Waiting for pod pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:17:40.282: INFO: Pod pod-configmaps-22bf6f85-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:17:40.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rlfjt" for this suite. Feb 15 12:17:46.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:17:46.428: INFO: namespace: e2e-tests-configmap-rlfjt, resource: bindings, ignored listing per whitelist Feb 15 12:17:46.554: INFO: namespace e2e-tests-configmap-rlfjt deletion completed in 6.257884257s • [SLOW TEST:17.057 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:17:46.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 12:17:46.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-7mrpt" to be "success or failure" Feb 15 12:17:47.026: INFO: Pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 118.813259ms Feb 15 12:17:49.125: INFO: Pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218505224s Feb 15 12:17:51.160: INFO: Pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253009455s Feb 15 12:17:53.173: INFO: Pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.266313535s Feb 15 12:17:55.190: INFO: Pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28317222s Feb 15 12:17:57.207: INFO: Pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.300190242s STEP: Saw pod success Feb 15 12:17:57.207: INFO: Pod "downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:17:57.216: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 12:17:57.324: INFO: Waiting for pod downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:17:57.396: INFO: Pod downwardapi-volume-2cfb27dc-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:17:57.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7mrpt" for this suite. Feb 15 12:18:03.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:18:03.576: INFO: namespace: e2e-tests-projected-7mrpt, resource: bindings, ignored listing per whitelist Feb 15 12:18:03.964: INFO: namespace e2e-tests-projected-7mrpt deletion completed in 6.549859087s • [SLOW TEST:17.409 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:18:03.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 15 12:18:04.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-2dtz2' Feb 15 12:18:08.228: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 15 12:18:08.228: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Feb 15 12:18:10.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2dtz2' Feb 15 12:18:10.532: INFO: stderr: "" Feb 15 12:18:10.532: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:18:10.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2dtz2" for this suite. Feb 15 12:18:16.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:18:16.801: INFO: namespace: e2e-tests-kubectl-2dtz2, resource: bindings, ignored listing per whitelist Feb 15 12:18:16.872: INFO: namespace e2e-tests-kubectl-2dtz2 deletion completed in 6.285232848s • [SLOW TEST:12.908 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:18:16.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3f060099-4fed-11ea-960a-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 15 12:18:17.247: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-configmap-pjb42" to be "success or failure" Feb 15 12:18:17.258: INFO: Pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.113195ms Feb 15 12:18:19.322: INFO: Pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074531675s Feb 15 12:18:21.366: INFO: Pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119402491s Feb 15 12:18:23.392: INFO: Pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.144777603s Feb 15 12:18:25.417: INFO: Pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 8.169832378s Feb 15 12:18:27.510: INFO: Pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.263073776s STEP: Saw pod success Feb 15 12:18:27.510: INFO: Pod "pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:18:27.526: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 15 12:18:27.689: INFO: Waiting for pod pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:18:27.705: INFO: Pod pod-configmaps-3f071cbf-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:18:27.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pjb42" for this suite. Feb 15 12:18:35.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:18:35.959: INFO: namespace: e2e-tests-configmap-pjb42, resource: bindings, ignored listing per whitelist Feb 15 12:18:36.022: INFO: namespace e2e-tests-configmap-pjb42 deletion completed in 8.306113768s • [SLOW TEST:19.149 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:18:36.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-4a59c29c-4fed-11ea-960a-0242ac110007 STEP: Creating a pod to test consume secrets Feb 15 12:18:36.280: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-mvmdl" to be "success or failure" Feb 15 12:18:36.299: INFO: Pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.853322ms Feb 15 12:18:38.317: INFO: Pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03694114s Feb 15 12:18:40.348: INFO: Pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068395075s Feb 15 12:18:42.361: INFO: Pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.081039275s Feb 15 12:18:44.374: INFO: Pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093713577s Feb 15 12:18:46.387: INFO: Pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106768132s STEP: Saw pod success Feb 15 12:18:46.387: INFO: Pod "pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:18:46.393: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Feb 15 12:18:47.108: INFO: Waiting for pod pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:18:47.306: INFO: Pod pod-projected-secrets-4a5b46ae-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:18:47.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mvmdl" for this suite. Feb 15 12:18:53.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:18:53.541: INFO: namespace: e2e-tests-projected-mvmdl, resource: bindings, ignored listing per whitelist Feb 15 12:18:53.667: INFO: namespace e2e-tests-projected-mvmdl deletion completed in 6.345645935s • [SLOW TEST:17.645 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:18:53.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 15 12:18:54.011: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-jdv4z" to be "success or failure" Feb 15 12:18:54.032: INFO: Pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.999668ms Feb 15 12:18:56.052: INFO: Pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040891849s Feb 15 12:18:58.068: INFO: Pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056646365s Feb 15 12:19:00.160: INFO: Pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148383936s Feb 15 12:19:02.167: INFO: Pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155890645s Feb 15 12:19:04.358: INFO: Pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.346563669s STEP: Saw pod success Feb 15 12:19:04.358: INFO: Pod "downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:19:04.368: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007 container client-container: STEP: delete the pod Feb 15 12:19:04.699: INFO: Waiting for pod downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:19:04.866: INFO: Pod downwardapi-volume-54efe870-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:19:04.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jdv4z" for this suite. Feb 15 12:19:10.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:19:11.052: INFO: namespace: e2e-tests-projected-jdv4z, resource: bindings, ignored listing per whitelist Feb 15 12:19:11.259: INFO: namespace e2e-tests-projected-jdv4z deletion completed in 6.37288826s • [SLOW TEST:17.591 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:19:11.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:19:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-4mg6j" for this suite. 
Feb 15 12:20:15.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:20:17.893: INFO: namespace: e2e-tests-kubelet-test-4mg6j, resource: bindings, ignored listing per whitelist Feb 15 12:20:17.984: INFO: namespace e2e-tests-kubelet-test-4mg6j deletion completed in 56.451986401s • [SLOW TEST:66.725 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:20:17.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 15 12:20:18.316: INFO: Waiting up to 5m0s for pod "pod-873cc037-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-wh6rd" to be "success or failure" Feb 15 12:20:18.430: INFO: Pod "pod-873cc037-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 113.67406ms Feb 15 12:20:20.673: INFO: Pod "pod-873cc037-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356986735s Feb 15 12:20:22.711: INFO: Pod "pod-873cc037-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394722268s Feb 15 12:20:25.037: INFO: Pod "pod-873cc037-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.720914403s Feb 15 12:20:27.504: INFO: Pod "pod-873cc037-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.18805323s Feb 15 12:20:29.680: INFO: Pod "pod-873cc037-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.36374505s STEP: Saw pod success Feb 15 12:20:29.680: INFO: Pod "pod-873cc037-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure" Feb 15 12:20:29.692: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-873cc037-4fed-11ea-960a-0242ac110007 container test-container: STEP: delete the pod Feb 15 12:20:29.834: INFO: Waiting for pod pod-873cc037-4fed-11ea-960a-0242ac110007 to disappear Feb 15 12:20:29.854: INFO: Pod pod-873cc037-4fed-11ea-960a-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:20:29.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wh6rd" for this suite. 
Feb 15 12:20:37.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:20:38.033: INFO: namespace: e2e-tests-emptydir-wh6rd, resource: bindings, ignored listing per whitelist Feb 15 12:20:38.142: INFO: namespace e2e-tests-emptydir-wh6rd deletion completed in 8.274467875s • [SLOW TEST:20.158 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:20:38.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Feb 15 12:20:38.289: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Feb 15 12:20:38.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:20:38.983: INFO: stderr: "" Feb 15 12:20:38.983: INFO: stdout: "service/redis-slave created\n" Feb 15 12:20:38.985: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Feb 15 12:20:38.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:20:39.376: INFO: stderr: "" Feb 15 12:20:39.376: INFO: stdout: "service/redis-master created\n" Feb 15 12:20:39.377: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 15 12:20:39.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:20:39.849: INFO: stderr: "" Feb 15 12:20:39.849: INFO: stdout: "service/frontend created\n" Feb 15 12:20:39.851: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Feb 15 12:20:39.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:20:40.281: INFO: stderr: "" Feb 15 12:20:40.282: INFO: stdout: "deployment.extensions/frontend created\n" Feb 15 12:20:40.283: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 15 12:20:40.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:20:40.985: INFO: stderr: "" Feb 15 12:20:40.985: INFO: stdout: "deployment.extensions/redis-master created\n" Feb 15 12:20:40.987: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Feb 15 12:20:40.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:20:41.764: INFO: stderr: "" Feb 15 12:20:41.765: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Feb 15 12:20:41.765: INFO: Waiting for all frontend pods to be Running. Feb 15 12:21:11.822: INFO: Waiting for frontend to serve content. Feb 15 12:21:12.108: INFO: Trying to add a new entry to the guestbook. Feb 15 12:21:12.269: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Feb 15 12:21:12.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:21:12.672: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 15 12:21:12.673: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 15 12:21:12.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:21:12.833: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 12:21:12.833: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 15 12:21:12.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:21:13.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 12:21:13.162: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 15 12:21:13.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:21:13.323: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 12:21:13.324: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 15 12:21:13.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:21:13.552: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 12:21:13.553: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 15 12:21:13.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qnp9t' Feb 15 12:21:13.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 12:21:13.944: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:21:13.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qnp9t" for this suite. 
Feb 15 12:22:06.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:22:06.336: INFO: namespace: e2e-tests-kubectl-qnp9t, resource: bindings, ignored listing per whitelist Feb 15 12:22:06.376: INFO: namespace e2e-tests-kubectl-qnp9t deletion completed in 52.349365433s • [SLOW TEST:88.232 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:22:06.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 15 12:22:17.431: INFO: Successfully updated pod "annotationupdatec7d8e2ef-4fed-11ea-960a-0242ac110007" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 15 12:22:19.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k5cd2" for this suite. 
Feb 15 12:22:43.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 12:22:44.016: INFO: namespace: e2e-tests-projected-k5cd2, resource: bindings, ignored listing per whitelist Feb 15 12:22:44.063: INFO: namespace e2e-tests-projected-k5cd2 deletion completed in 24.547341207s • [SLOW TEST:37.688 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 15 12:22:44.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 15 12:22:44.354: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 18.809135ms)
Feb 15 12:22:44.366: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.69348ms)
Feb 15 12:22:44.397: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 30.810713ms)
Feb 15 12:22:44.404: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.028304ms)
Feb 15 12:22:44.409: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.651238ms)
Feb 15 12:22:44.413: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.247911ms)
Feb 15 12:22:44.417: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.984472ms)
Feb 15 12:22:44.421: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.526136ms)
Feb 15 12:22:44.427: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.270984ms)
Feb 15 12:22:44.431: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.333413ms)
Feb 15 12:22:44.442: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.679771ms)
Feb 15 12:22:44.446: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.777087ms)
Feb 15 12:22:44.450: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.096178ms)
Feb 15 12:22:44.460: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.291196ms)
Feb 15 12:22:44.467: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.370535ms)
Feb 15 12:22:44.471: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.90217ms)
Feb 15 12:22:44.477: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.780186ms)
Feb 15 12:22:44.485: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.623601ms)
Feb 15 12:22:44.496: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.049765ms)
Feb 15 12:22:44.504: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.934393ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:22:44.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-qt5k6" for this suite.
Feb 15 12:22:50.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:22:50.753: INFO: namespace: e2e-tests-proxy-qt5k6, resource: bindings, ignored listing per whitelist
Feb 15 12:22:50.816: INFO: namespace e2e-tests-proxy-qt5k6 deletion completed in 6.303747577s

• [SLOW TEST:6.753 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
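Each of the twenty numbered requests above is a GET against the node proxy subresource, /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/, which the API server forwards to that node's kubelet /logs handler; the body is the listing of the node's /var/log directory (hence alternatives.log), truncated by the test before it logs the status code and latency. The same endpoint can be queried by hand with, for example, kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/" against the same kubeconfig.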
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:22:50.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 15 12:23:01.625: INFO: Successfully updated pod "pod-update-e243e9bc-4fed-11ea-960a-0242ac110007"
STEP: verifying the updated pod is in kubernetes
Feb 15 12:23:01.708: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:23:01.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vwvgw" for this suite.
Feb 15 12:23:25.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:23:25.897: INFO: namespace: e2e-tests-pods-vwvgw, resource: bindings, ignored listing per whitelist
Feb 15 12:23:25.941: INFO: namespace e2e-tests-pods-vwvgw deletion completed in 24.227374268s

• [SLOW TEST:35.124 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
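The update test above creates a pod, waits for it to run, then mutates a field that is mutable after creation (this variant updates the pod's labels) and re-reads the object, which is what the "Pod update OK" line records. A minimal pod that can be exercised the same way (name and label values are illustrative; the framework generates its own):

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-example      # illustrative; the test generates names like pod-update-e243e9bc-...
  labels:
    time: "123456"              # a label is a safe field to change on update
spec:
  containers:
  - name: nginx
    image: nginx

Labels and annotations can be changed freely after creation, while most of the PodSpec is immutable, so an update that touched, say, a container command would be rejected by the API server.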
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:23:25.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f7359fd8-4fed-11ea-960a-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 15 12:23:26.193: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-nmj7j" to be "success or failure"
Feb 15 12:23:26.310: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 116.568049ms
Feb 15 12:23:28.326: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133236636s
Feb 15 12:23:30.352: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159319649s
Feb 15 12:23:33.439: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.245592075s
Feb 15 12:23:35.469: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.275902819s
Feb 15 12:23:37.486: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.292456064s
Feb 15 12:23:39.496: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.302538736s
STEP: Saw pod success
Feb 15 12:23:39.496: INFO: Pod "pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:23:39.501: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 12:23:39.793: INFO: Waiting for pod pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007 to disappear
Feb 15 12:23:39.805: INFO: Pod pod-projected-configmaps-f7380b96-4fed-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:23:39.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nmj7j" for this suite.
Feb 15 12:23:45.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:23:46.120: INFO: namespace: e2e-tests-projected-nmj7j, resource: bindings, ignored listing per whitelist
Feb 15 12:23:46.193: INFO: namespace e2e-tests-projected-nmj7j deletion completed in 6.374608464s

• [SLOW TEST:20.251 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
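"With mappings" means the projected ConfigMap volume uses an items list to remap a key onto a chosen path instead of exposing every key under its own name. A minimal sketch of the same shape (names, key and path are illustrative, not the generated ones above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map   # the real test appends a generated suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-1      # the "mapping": key data-1 is exposed at this relative path

The pod prints the mapped value and exits 0, which is why the framework waits for the "success or failure" condition and then reads the container log.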
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:23:46.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-vbdpv
Feb 15 12:23:56.668: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-vbdpv
STEP: checking the pod's current state and verifying that restartCount is present
Feb 15 12:23:56.676: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:27:58.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vbdpv" for this suite.
Feb 15 12:28:06.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:28:06.879: INFO: namespace: e2e-tests-container-probe-vbdpv, resource: bindings, ignored listing per whitelist
Feb 15 12:28:06.908: INFO: namespace e2e-tests-container-probe-vbdpv deletion completed in 8.226882947s

• [SLOW TEST:260.715 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
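The probe test above runs a pod named liveness-exec whose liveness probe simply cats a file the container created at startup; because the file never disappears the probe keeps succeeding, and the test watches for roughly four minutes to confirm restartCount stays at 0. A sketch of that shape (probe timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 15
      periodSeconds: 5

The companion conformance test removes /tmp/health after a delay and asserts the opposite outcome: the kubelet restarts the container once the exec probe starts failing.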
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:28:06.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 15 12:28:07.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:28:17.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7llxq" for this suite.
Feb 15 12:29:03.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:29:03.644: INFO: namespace: e2e-tests-pods-7llxq, resource: bindings, ignored listing per whitelist
Feb 15 12:29:03.692: INFO: namespace e2e-tests-pods-7llxq deletion completed in 46.236581077s

• [SLOW TEST:56.783 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
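Here the pod's log is fetched over a websocket connection to the API server rather than with a plain GET; either way it is the pod log subresource, GET /api/v1/namespaces/<namespace>/pods/<name>/log, the same endpoint kubectl logs uses. The pod itself only needs to write something to stdout, for example (name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-logs-websocket-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo hello from the container"]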
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:29:03.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 15 12:29:04.040: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:29:21.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4m6pf" for this suite.
Feb 15 12:29:29.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:29:29.588: INFO: namespace: e2e-tests-init-container-4m6pf, resource: bindings, ignored listing per whitelist
Feb 15 12:29:29.746: INFO: namespace e2e-tests-init-container-4m6pf deletion completed in 8.240318205s

• [SLOW TEST:26.053 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
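On a pod whose restartPolicy is Never, the init containers run once each, in order, and the regular container starts only after both succeed; the test above watches the pod's status and events to confirm exactly that ordering. A minimal pod of that form (the container names mirror the ones this suite uses for its init-container pods):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example        # illustrative; the test generates its own name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["/bin/true"]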
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:29:29.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 15 12:29:30.311: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.786556ms)
Feb 15 12:29:30.320: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.919502ms)
Feb 15 12:29:30.332: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.631677ms)
Feb 15 12:29:30.342: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.847339ms)
Feb 15 12:29:30.350: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.155636ms)
Feb 15 12:29:30.357: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.288968ms)
Feb 15 12:29:30.365: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.948607ms)
Feb 15 12:29:30.411: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 45.826579ms)
Feb 15 12:29:30.418: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.906964ms)
Feb 15 12:29:30.425: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.663253ms)
Feb 15 12:29:30.433: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.592146ms)
Feb 15 12:29:30.441: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.385572ms)
Feb 15 12:29:30.453: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.031848ms)
Feb 15 12:29:30.462: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.735265ms)
Feb 15 12:29:30.476: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.303541ms)
Feb 15 12:29:30.491: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.704984ms)
Feb 15 12:29:30.504: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.826992ms)
Feb 15 12:29:30.514: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.930657ms)
Feb 15 12:29:30.527: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.772828ms)
Feb 15 12:29:30.540: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.1729ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:29:30.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-bc56w" for this suite.
Feb 15 12:29:36.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:29:36.815: INFO: namespace: e2e-tests-proxy-bc56w, resource: bindings, ignored listing per whitelist
Feb 15 12:29:36.831: INFO: namespace e2e-tests-proxy-bc56w deletion completed in 6.279363586s

• [SLOW TEST:7.084 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
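This is the same check as the earlier proxy-logs test, except the node is addressed with an explicit port in the proxy path, /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/, so the API server dials the kubelet on port 10250 explicitly rather than using the kubelet port recorded in the node's status.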
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:29:36.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 15 12:29:37.097: INFO: Waiting up to 5m0s for pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-9m98r" to be "success or failure"
Feb 15 12:29:37.145: INFO: Pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 47.916018ms
Feb 15 12:29:39.549: INFO: Pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451152912s
Feb 15 12:29:41.571: INFO: Pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473453654s
Feb 15 12:29:43.613: INFO: Pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.515954701s
Feb 15 12:29:45.626: INFO: Pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528920893s
Feb 15 12:29:47.638: INFO: Pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.540689523s
STEP: Saw pod success
Feb 15 12:29:47.638: INFO: Pod "pod-d44b8b08-4fee-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:29:47.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d44b8b08-4fee-11ea-960a-0242ac110007 container test-container: 
STEP: delete the pod
Feb 15 12:29:48.330: INFO: Waiting for pod pod-d44b8b08-4fee-11ea-960a-0242ac110007 to disappear
Feb 15 12:29:48.438: INFO: Pod pod-d44b8b08-4fee-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:29:48.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9m98r" for this suite.
Feb 15 12:29:56.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:29:56.623: INFO: namespace: e2e-tests-emptydir-9m98r, resource: bindings, ignored listing per whitelist
Feb 15 12:29:56.865: INFO: namespace e2e-tests-emptydir-9m98r deletion completed in 8.41362657s

• [SLOW TEST:20.034 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
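The (non-root,0666,tmpfs) triple in the test name describes the pod: it runs as a non-root UID, writes a file with mode 0666 into an emptyDir volume, and the emptyDir is backed by tmpfs (medium: Memory). The real test uses its own mount-test image and flags; a rough busybox equivalent looks like:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo hi > /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f && cat /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir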
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:29:56.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:31:02.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-csqkk" for this suite.
Feb 15 12:31:08.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:31:08.995: INFO: namespace: e2e-tests-container-runtime-csqkk, resource: bindings, ignored listing per whitelist
Feb 15 12:31:09.147: INFO: namespace e2e-tests-container-runtime-csqkk deletion completed in 6.368116065s

• [SLOW TEST:72.280 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
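The three containers differ in the restart policy their pod carries (the rpa/rpof/rpn suffixes correspond to RestartPolicy Always, OnFailure and Never) and in how their command exits, and the test asserts that RestartCount, Phase, the Ready condition and the terminal State all match what that policy implies. The easiest case to reproduce by hand is the Never variant (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["/bin/sh", "-c", "exit 1"]

With restartPolicy: Never a non-zero exit leaves the pod in phase Failed with restartCount 0; OnFailure would restart the container until it exits 0, and Always restarts it regardless of exit code.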
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:31:09.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 15 12:31:19.996: INFO: Waiting up to 5m0s for pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007" in namespace "e2e-tests-pods-4psn6" to be "success or failure"
Feb 15 12:31:20.042: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 45.754275ms
Feb 15 12:31:22.234: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237531326s
Feb 15 12:31:24.257: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260531877s
Feb 15 12:31:26.655: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659172887s
Feb 15 12:31:28.676: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.6799495s
Feb 15 12:31:31.212: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.216215687s
Feb 15 12:31:33.235: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.239316449s
STEP: Saw pod success
Feb 15 12:31:33.236: INFO: Pod "client-envvars-118dab9b-4fef-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:31:33.246: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-118dab9b-4fef-11ea-960a-0242ac110007 container env3cont: 
STEP: delete the pod
Feb 15 12:31:33.350: INFO: Waiting for pod client-envvars-118dab9b-4fef-11ea-960a-0242ac110007 to disappear
Feb 15 12:31:33.360: INFO: Pod client-envvars-118dab9b-4fef-11ea-960a-0242ac110007 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:31:33.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4psn6" for this suite.
Feb 15 12:32:17.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:32:17.497: INFO: namespace: e2e-tests-pods-4psn6, resource: bindings, ignored listing per whitelist
Feb 15 12:32:17.572: INFO: namespace e2e-tests-pods-4psn6 deletion completed in 44.16542845s

• [SLOW TEST:68.424 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
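For service environment variables to appear in a pod, the Service has to exist before the pod is created, which is why the client pod above is only created some ten seconds into the test, after the backing pod and Service are up. A sketch of the client side (the Service name and selector are illustrative; env3cont is the container name shown in the log):

apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    name: server-envvars
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox
    command: ["/bin/sh", "-c", "env | grep FOOSERVICE"]   # expects FOOSERVICE_SERVICE_HOST / _PORT

The kubelet injects FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT and the docker-link-style FOOSERVICE_PORT_* variables for every Service that existed in the namespace when the pod started.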
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:32:17.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 15 12:32:18.057: INFO: PodSpec: initContainers in spec.initContainers
Feb 15 12:33:33.529: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-343f78b3-4fef-11ea-960a-0242ac110007", GenerateName:"", Namespace:"e2e-tests-init-container-cktmf", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-cktmf/pods/pod-init-343f78b3-4fef-11ea-960a-0242ac110007", UID:"344056e4-4fef-11ea-a994-fa163e34d433", ResourceVersion:"21757419", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717366738, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"57465417"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5qn47", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001922080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5qn47", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5qn47", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5qn47", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00052d458), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001681e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0004b6410)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0004b6460)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0004b6468), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0004b646c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717366738, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717366738, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717366738, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717366738, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00102e220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001a1d5e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001a1d650)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://b624b4d614838601a61ebff787a1916443968cc5a813f094f996241a0430c508"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00102e2c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00102e240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:33:33.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cktmf" for this suite.
Feb 15 12:33:57.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:33:57.881: INFO: namespace: e2e-tests-init-container-cktmf, resource: bindings, ignored listing per whitelist
Feb 15 12:33:57.953: INFO: namespace e2e-tests-init-container-cktmf deletion completed in 24.340183954s

• [SLOW TEST:100.381 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
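The pod dumped in the failure message above can be reconstructed with something like the following. This is a minimal sketch using the k8s.io/api types (a client-go/api vintage close to the cluster's v1.13 is assumed); the names, images and commands mirror the dump: init1 runs /bin/false and never succeeds, so with restartPolicy Always the kubelet keeps restarting it and neither init2 nor the pause app container run1 is ever started.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pod with two init containers; the first always fails, so with
        // restartPolicy Always the kubelet keeps retrying init1 and the
        // app container "run1" must never be started.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "pod-init-example", // placeholder; the test uses a generated name
                Labels: map[string]string{"name": "foo"},
            },
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
        fmt.Println(pod.Name)
    }

The check then watches the pod and asserts that init1's restart count keeps climbing while run1 stays Waiting, which is exactly the status block logged above (RestartCount:3 on init1, run1 still in a Waiting state).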
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:33:57.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-dm5w
STEP: Creating a pod to test atomic-volume-subpath
Feb 15 12:33:58.218: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dm5w" in namespace "e2e-tests-subpath-s6b7x" to be "success or failure"
Feb 15 12:33:58.299: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 80.90462ms
Feb 15 12:34:00.313: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094733272s
Feb 15 12:34:02.324: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105763592s
Feb 15 12:34:04.339: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120587404s
Feb 15 12:34:07.185: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.966092378s
Feb 15 12:34:09.206: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.987502224s
Feb 15 12:34:11.429: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 13.209963009s
Feb 15 12:34:13.442: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Pending", Reason="", readiness=false. Elapsed: 15.222992471s
Feb 15 12:34:15.456: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 17.237320612s
Feb 15 12:34:17.473: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 19.25463986s
Feb 15 12:34:19.490: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 21.271870925s
Feb 15 12:34:21.511: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 23.292637295s
Feb 15 12:34:23.532: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 25.313761197s
Feb 15 12:34:25.554: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 27.335795283s
Feb 15 12:34:27.568: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 29.349629699s
Feb 15 12:34:29.584: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 31.364924856s
Feb 15 12:34:31.601: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Running", Reason="", readiness=false. Elapsed: 33.382364924s
Feb 15 12:34:33.620: INFO: Pod "pod-subpath-test-configmap-dm5w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.401310285s
STEP: Saw pod success
Feb 15 12:34:33.620: INFO: Pod "pod-subpath-test-configmap-dm5w" satisfied condition "success or failure"
Feb 15 12:34:33.641: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-dm5w container test-container-subpath-configmap-dm5w: 
STEP: delete the pod
Feb 15 12:34:33.728: INFO: Waiting for pod pod-subpath-test-configmap-dm5w to disappear
Feb 15 12:34:33.745: INFO: Pod pod-subpath-test-configmap-dm5w no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dm5w
Feb 15 12:34:33.745: INFO: Deleting pod "pod-subpath-test-configmap-dm5w" in namespace "e2e-tests-subpath-s6b7x"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:34:33.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-s6b7x" for this suite.
Feb 15 12:34:39.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:34:40.065: INFO: namespace: e2e-tests-subpath-s6b7x, resource: bindings, ignored listing per whitelist
Feb 15 12:34:40.144: INFO: namespace e2e-tests-subpath-s6b7x deletion completed in 6.383052457s

• [SLOW TEST:42.191 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
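The subpath case above comes down to mounting a single entry of a ConfigMap-backed (atomic-writer) volume via volumeMounts[].subPath. A minimal sketch of that shape follows; the ConfigMap name, key and mount path are illustrative, not taken from this run, and the real test container does more than a single cat.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            // placeholder ConfigMap name
                            LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-subpath",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"cat", "/mnt/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "test-volume",
                        // subPath mounts a single entry of the volume
                        // rather than the whole directory.
                        SubPath:   "data-1", // placeholder key
                        MountPath: "/mnt/data-1",
                    }},
                }},
            },
        }
        fmt.Println(pod.Name)
    }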
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:34:40.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wf42c
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 15 12:34:40.385: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 15 12:35:12.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wf42c PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:35:12.691: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:35:12.765955       8 log.go:172] (0xc00099d760) (0xc0025d85a0) Create stream
I0215 12:35:12.766149       8 log.go:172] (0xc00099d760) (0xc0025d85a0) Stream added, broadcasting: 1
I0215 12:35:12.771218       8 log.go:172] (0xc00099d760) Reply frame received for 1
I0215 12:35:12.771347       8 log.go:172] (0xc00099d760) (0xc001d1c1e0) Create stream
I0215 12:35:12.771361       8 log.go:172] (0xc00099d760) (0xc001d1c1e0) Stream added, broadcasting: 3
I0215 12:35:12.773127       8 log.go:172] (0xc00099d760) Reply frame received for 3
I0215 12:35:12.773150       8 log.go:172] (0xc00099d760) (0xc0025d8640) Create stream
I0215 12:35:12.773159       8 log.go:172] (0xc00099d760) (0xc0025d8640) Stream added, broadcasting: 5
I0215 12:35:12.774211       8 log.go:172] (0xc00099d760) Reply frame received for 5
I0215 12:35:12.939110       8 log.go:172] (0xc00099d760) Data frame received for 3
I0215 12:35:12.939253       8 log.go:172] (0xc001d1c1e0) (3) Data frame handling
I0215 12:35:12.939282       8 log.go:172] (0xc001d1c1e0) (3) Data frame sent
I0215 12:35:13.085789       8 log.go:172] (0xc00099d760) (0xc0025d8640) Stream removed, broadcasting: 5
I0215 12:35:13.086298       8 log.go:172] (0xc00099d760) Data frame received for 1
I0215 12:35:13.086718       8 log.go:172] (0xc00099d760) (0xc001d1c1e0) Stream removed, broadcasting: 3
I0215 12:35:13.086941       8 log.go:172] (0xc0025d85a0) (1) Data frame handling
I0215 12:35:13.086990       8 log.go:172] (0xc0025d85a0) (1) Data frame sent
I0215 12:35:13.087028       8 log.go:172] (0xc00099d760) (0xc0025d85a0) Stream removed, broadcasting: 1
I0215 12:35:13.087050       8 log.go:172] (0xc00099d760) Go away received
I0215 12:35:13.088120       8 log.go:172] (0xc00099d760) (0xc0025d85a0) Stream removed, broadcasting: 1
I0215 12:35:13.088165       8 log.go:172] (0xc00099d760) (0xc001d1c1e0) Stream removed, broadcasting: 3
I0215 12:35:13.088209       8 log.go:172] (0xc00099d760) (0xc0025d8640) Stream removed, broadcasting: 5
Feb 15 12:35:13.088: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:35:13.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-wf42c" for this suite.
Feb 15 12:35:37.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:35:37.308: INFO: namespace: e2e-tests-pod-network-test-wf42c, resource: bindings, ignored listing per whitelist
Feb 15 12:35:37.397: INFO: namespace e2e-tests-pod-network-test-wf42c deletion completed in 24.292076793s

• [SLOW TEST:57.253 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
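The pod-to-pod check above (and the http variant later in this run) is driven by exec'ing curl inside a host-network helper pod against a test pod's /dial endpoint; the full query string is visible verbatim in the ExecWithOptions line. A small sketch of assembling an equivalent URL, with the IPs and ports taken from the log (parameter order may differ after encoding):

    package main

    import (
        "fmt"
        "net/url"
    )

    // dialURL builds a URL equivalent to the probe URL in the log above: the
    // helper endpoint at probeHost:probePort is asked to dial targetHost:targetPort
    // over the given protocol and report the hostnames it reaches.
    func dialURL(probeHost string, probePort int, protocol, targetHost string, targetPort, tries int) string {
        q := url.Values{}
        q.Set("request", "hostName")
        q.Set("protocol", protocol)
        q.Set("host", targetHost)
        q.Set("port", fmt.Sprintf("%d", targetPort))
        q.Set("tries", fmt.Sprintf("%d", tries))
        return fmt.Sprintf("http://%s:%d/dial?%s", probeHost, probePort, q.Encode())
    }

    func main() {
        // Values taken from the UDP check logged above; the later http check
        // uses the same pattern with protocol=http and target port 8080.
        fmt.Println(dialURL("10.32.0.5", 8080, "udp", "10.32.0.4", 8081, 1))
    }

"Waiting for endpoints: map[]" at the end of the check means every expected hostname was reported back, i.e. nothing is left outstanding.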
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:35:37.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 15 12:35:37.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-f649v" to be "success or failure"
Feb 15 12:35:37.709: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 61.906739ms
Feb 15 12:35:39.725: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077129428s
Feb 15 12:35:42.357: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.709546433s
Feb 15 12:35:44.372: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.724555642s
Feb 15 12:35:46.659: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.011073523s
Feb 15 12:35:48.709: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.061966751s
Feb 15 12:35:50.744: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.096050795s
STEP: Saw pod success
Feb 15 12:35:50.744: INFO: Pod "downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:35:50.763: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007 container client-container: 
STEP: delete the pod
Feb 15 12:35:50.978: INFO: Waiting for pod downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007 to disappear
Feb 15 12:35:51.005: INFO: Pod downwardapi-volume-ab309483-4fef-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:35:51.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f649v" for this suite.
Feb 15 12:35:57.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:35:57.347: INFO: namespace: e2e-tests-downward-api-f649v, resource: bindings, ignored listing per whitelist
Feb 15 12:35:57.363: INFO: namespace e2e-tests-downward-api-f649v deletion completed in 6.346822351s

• [SLOW TEST:19.965 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
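The "set mode on item file" case projects a pod field into a file with an explicit per-item file mode. A minimal sketch of that volume shape is below; the projected field, file name and 0400 mode are illustrative assumptions, not read from this run.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        mode := int32(0400) // per-item file mode under test (illustrative)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                Mode:     &mode,
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
        fmt.Println(pod.Name)
    }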
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:35:57.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 15 12:35:57.541: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757725,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 15 12:35:57.542: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757725,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 15 12:36:07.564: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757738,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 15 12:36:07.565: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757738,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 15 12:36:17.619: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757750,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 15 12:36:17.620: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757750,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 15 12:36:27.640: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757763,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 15 12:36:27.640: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-a,UID:b70f77b3-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757763,Generation:0,CreationTimestamp:2020-02-15 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 15 12:36:37.671: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-b,UID:cef9cfaf-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757775,Generation:0,CreationTimestamp:2020-02-15 12:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 15 12:36:37.672: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-b,UID:cef9cfaf-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757775,Generation:0,CreationTimestamp:2020-02-15 12:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 15 12:36:47.700: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-b,UID:cef9cfaf-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757788,Generation:0,CreationTimestamp:2020-02-15 12:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 15 12:36:47.700: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pf4zf,SelfLink:/api/v1/namespaces/e2e-tests-watch-pf4zf/configmaps/e2e-watch-test-configmap-b,UID:cef9cfaf-4fef-11ea-a994-fa163e34d433,ResourceVersion:21757788,Generation:0,CreationTimestamp:2020-02-15 12:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:36:57.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pf4zf" for this suite.
Feb 15 12:37:03.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:37:04.340: INFO: namespace: e2e-tests-watch-pf4zf, resource: bindings, ignored listing per whitelist
Feb 15 12:37:04.402: INFO: namespace e2e-tests-watch-pf4zf deletion completed in 6.67470315s

• [SLOW TEST:67.039 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
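The watch test registers label-keyed watches (label A, label B, and A-or-B) and asserts which of them see each ADDED/MODIFIED/DELETED event, which is why every event above is logged twice. A sketch of how one such watch can be opened, assuming a v1.13-era client-go where Watch takes no context argument; the kubeconfig path, namespace and label value are taken from the log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Watch only ConfigMaps carrying label A; a second watch on
        // watch-this-configmap=multiple-watchers-B and a third with a
        // selector matching either value complete the picture.
        w, err := client.CoreV1().ConfigMaps("e2e-tests-watch-pf4zf").Watch(metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        for ev := range w.ResultChan() {
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                fmt.Printf("Got : %s %s\n", ev.Type, cm.Name)
            }
        }
    }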
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:37:04.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jjxqx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 15 12:37:04.682: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 15 12:37:44.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-jjxqx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:37:44.966: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:37:45.088961       8 log.go:172] (0xc000dbc4d0) (0xc001dae000) Create stream
I0215 12:37:45.089732       8 log.go:172] (0xc000dbc4d0) (0xc001dae000) Stream added, broadcasting: 1
I0215 12:37:45.101468       8 log.go:172] (0xc000dbc4d0) Reply frame received for 1
I0215 12:37:45.101518       8 log.go:172] (0xc000dbc4d0) (0xc001b263c0) Create stream
I0215 12:37:45.101530       8 log.go:172] (0xc000dbc4d0) (0xc001b263c0) Stream added, broadcasting: 3
I0215 12:37:45.102959       8 log.go:172] (0xc000dbc4d0) Reply frame received for 3
I0215 12:37:45.102987       8 log.go:172] (0xc000dbc4d0) (0xc001b26460) Create stream
I0215 12:37:45.102995       8 log.go:172] (0xc000dbc4d0) (0xc001b26460) Stream added, broadcasting: 5
I0215 12:37:45.104458       8 log.go:172] (0xc000dbc4d0) Reply frame received for 5
I0215 12:37:45.262073       8 log.go:172] (0xc000dbc4d0) Data frame received for 3
I0215 12:37:45.262244       8 log.go:172] (0xc001b263c0) (3) Data frame handling
I0215 12:37:45.262282       8 log.go:172] (0xc001b263c0) (3) Data frame sent
I0215 12:37:45.411088       8 log.go:172] (0xc000dbc4d0) (0xc001b263c0) Stream removed, broadcasting: 3
I0215 12:37:45.411384       8 log.go:172] (0xc000dbc4d0) Data frame received for 1
I0215 12:37:45.411433       8 log.go:172] (0xc001dae000) (1) Data frame handling
I0215 12:37:45.411480       8 log.go:172] (0xc001dae000) (1) Data frame sent
I0215 12:37:45.411581       8 log.go:172] (0xc000dbc4d0) (0xc001dae000) Stream removed, broadcasting: 1
I0215 12:37:45.411967       8 log.go:172] (0xc000dbc4d0) (0xc001b26460) Stream removed, broadcasting: 5
I0215 12:37:45.412048       8 log.go:172] (0xc000dbc4d0) (0xc001dae000) Stream removed, broadcasting: 1
I0215 12:37:45.412065       8 log.go:172] (0xc000dbc4d0) (0xc001b263c0) Stream removed, broadcasting: 3
I0215 12:37:45.412073       8 log.go:172] (0xc000dbc4d0) (0xc001b26460) Stream removed, broadcasting: 5
I0215 12:37:45.412263       8 log.go:172] (0xc000dbc4d0) Go away received
Feb 15 12:37:45.412: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:37:45.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-jjxqx" for this suite.
Feb 15 12:38:09.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:38:09.521: INFO: namespace: e2e-tests-pod-network-test-jjxqx, resource: bindings, ignored listing per whitelist
Feb 15 12:38:09.640: INFO: namespace e2e-tests-pod-network-test-jjxqx deletion completed in 24.203003815s

• [SLOW TEST:65.238 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:38:09.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007
Feb 15 12:38:09.937: INFO: Pod name my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007: Found 0 pods out of 1
Feb 15 12:38:15.453: INFO: Pod name my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007: Found 1 pods out of 1
Feb 15 12:38:15.453: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007" are running
Feb 15 12:38:19.652: INFO: Pod "my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007-drdtr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 12:38:10 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 12:38:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 12:38:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 12:38:09 +0000 UTC Reason: Message:}])
Feb 15 12:38:19.653: INFO: Trying to dial the pod
Feb 15 12:38:24.693: INFO: Controller my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007: Got expected result from replica 1 [my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007-drdtr]: "my-hostname-basic-05f9aad4-4ff0-11ea-960a-0242ac110007-drdtr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:38:24.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-bjscs" for this suite.
Feb 15 12:38:30.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:38:30.932: INFO: namespace: e2e-tests-replication-controller-bjscs, resource: bindings, ignored listing per whitelist
Feb 15 12:38:30.966: INFO: namespace e2e-tests-replication-controller-bjscs deletion completed in 6.264434429s

• [SLOW TEST:21.325 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
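The ReplicationController case creates a single-replica RC from a public image and then dials each replica, expecting it to answer with its own pod name ("1 of 1 required successes" above). A minimal sketch of such an RC follows; the image and port are stand-ins, since the real test uses a hostname-serving test image not named in this log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        name := "my-hostname-basic-example" // placeholder; the test appends a run-specific suffix

        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"name": name},
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name: name,
                            // Stand-in: any public image that answers HTTP with its own hostname works here.
                            Image: "k8s.gcr.io/serve_hostname:1.1", // hypothetical image/tag
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                        }},
                    },
                },
            },
        }
        fmt.Println(rc.Name)
    }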
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:38:30.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-12a1a82a-4ff0-11ea-960a-0242ac110007
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:38:47.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zk4nf" for this suite.
Feb 15 12:39:11.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:39:11.557: INFO: namespace: e2e-tests-configmap-zk4nf, resource: bindings, ignored listing per whitelist
Feb 15 12:39:11.651: INFO: namespace e2e-tests-configmap-zk4nf deletion completed in 24.230285105s

• [SLOW TEST:40.685 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
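The binary-data case stores a text key and a binaryData key in the same ConfigMap and checks that the mounted files reproduce the bytes exactly ("Waiting for pod with text data" / "Waiting for pod with binary data" above). A sketch of the object, with illustrative keys and bytes:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"}, // placeholder name
            // Text values go in Data; arbitrary bytes (here a non-UTF-8
            // sequence) go in BinaryData and come back verbatim from the
            // mounted volume.
            Data: map[string]string{
                "data": "value",
            },
            BinaryData: map[string][]byte{
                "dump": {0xde, 0xca, 0xfe, 0x00, 0xff},
            },
        }
        fmt.Println(cm.Name, len(cm.BinaryData["dump"]))
    }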
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:39:11.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fzrhm
Feb 15 12:39:21.966: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fzrhm
STEP: checking the pod's current state and verifying that restartCount is present
Feb 15 12:39:21.972: INFO: Initial restart count of pod liveness-http is 0
Feb 15 12:39:48.412: INFO: Restart count of pod e2e-tests-container-probe-fzrhm/liveness-http is now 1 (26.440717546s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:39:48.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fzrhm" for this suite.
Feb 15 12:39:56.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:39:56.666: INFO: namespace: e2e-tests-container-probe-fzrhm, resource: bindings, ignored listing per whitelist
Feb 15 12:39:56.914: INFO: namespace e2e-tests-container-probe-fzrhm deletion completed in 8.448582085s

• [SLOW TEST:45.263 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
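The liveness-http pod above is restarted once within ~26s because its /healthz probe starts failing after the initial delay. A minimal sketch of the probe wiring, assuming a v1.13-era API where Probe embeds Handler (renamed ProbeHandler in much newer releases); the image and timing values are stand-ins, not read from this run.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "example/liveness-server", // stand-in: an HTTP server whose /healthz starts failing after a while
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/healthz",
                                Port: intstr.FromInt(8080),
                            },
                        },
                        InitialDelaySeconds: 15, // illustrative values
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }

Once the probe fails, the kubelet kills and restarts the container, and the test passes as soon as it observes restartCount move from 0 to 1, as logged above.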
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:39:56.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 12:39:57.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-w4pkm'
Feb 15 12:39:59.650: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 12:39:59.651: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb 15 12:40:03.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-w4pkm'
Feb 15 12:40:03.975: INFO: stderr: ""
Feb 15 12:40:03.976: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:40:03.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w4pkm" for this suite.
Feb 15 12:40:10.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:40:10.321: INFO: namespace: e2e-tests-kubectl-w4pkm, resource: bindings, ignored listing per whitelist
Feb 15 12:40:10.329: INFO: namespace e2e-tests-kubectl-w4pkm deletion completed in 6.341815936s

• [SLOW TEST:13.415 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
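The deprecation warning in the kubectl stderr above is the notable part: --generator=deployment/v1beta1 is on its way out, and the created object lands in the extensions group ("deployment.extensions/... created"). The same deployment can be expressed directly against the non-deprecated apps/v1 shape; a sketch follows, where the run=<name> pod label is an assumption about what the generator would have produced.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        labels := map[string]string{"run": "e2e-test-nginx-deployment"} // assumed generator-style label

        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-deployment",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        fmt.Println(d.Name)
    }

On the CLI, kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine is the non-deprecated equivalent of the run command shown in the log.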
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:40:10.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-4dd54105-4ff0-11ea-960a-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 15 12:40:10.575: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-jqkwz" to be "success or failure"
Feb 15 12:40:10.626: INFO: Pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 50.673118ms
Feb 15 12:40:12.704: INFO: Pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128053214s
Feb 15 12:40:14.715: INFO: Pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139305045s
Feb 15 12:40:16.726: INFO: Pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150576775s
Feb 15 12:40:18.748: INFO: Pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172306947s
Feb 15 12:40:20.762: INFO: Pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186399499s
STEP: Saw pod success
Feb 15 12:40:20.762: INFO: Pod "pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:40:20.768: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 12:40:21.154: INFO: Waiting for pod pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007 to disappear
Feb 15 12:40:21.204: INFO: Pod pod-projected-configmaps-4ddfe322-4ff0-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:40:21.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jqkwz" for this suite.
Feb 15 12:40:27.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:40:27.686: INFO: namespace: e2e-tests-projected-jqkwz, resource: bindings, ignored listing per whitelist
Feb 15 12:40:27.716: INFO: namespace e2e-tests-projected-jqkwz deletion completed in 6.500530352s

• [SLOW TEST:17.387 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
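The projected-volume variant wraps the ConfigMap behind a projected volume source and runs the consuming container as a non-root UID. A sketch of that shape; UID 1000, the ConfigMap name and the key are illustrative assumptions.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1000) // non-root UID for the consuming container (illustrative)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    // placeholder ConfigMap name
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:            "projected-configmap-volume-test",
                    Image:           "docker.io/library/busybox:1.29",
                    Command:         []string{"cat", "/etc/projected-configmap-volume/data-1"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-configmap-volume",
                        MountPath: "/etc/projected-configmap-volume",
                    }},
                }},
            },
        }
        fmt.Println(pod.Name)
    }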
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:40:27.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-583ed3a1-4ff0-11ea-960a-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 15 12:40:27.986: INFO: Waiting up to 5m0s for pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007" in namespace "e2e-tests-secrets-wbzfs" to be "success or failure"
Feb 15 12:40:28.004: INFO: Pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.218267ms
Feb 15 12:40:30.220: INFO: Pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233625535s
Feb 15 12:40:32.246: INFO: Pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259470172s
Feb 15 12:40:34.392: INFO: Pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405206785s
Feb 15 12:40:36.403: INFO: Pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416833146s
Feb 15 12:40:38.425: INFO: Pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.438097494s
STEP: Saw pod success
Feb 15 12:40:38.425: INFO: Pod "pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:40:38.446: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 15 12:40:38.621: INFO: Waiting for pod pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007 to disappear
Feb 15 12:40:38.636: INFO: Pod pod-secrets-5841640e-4ff0-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:40:38.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wbzfs" for this suite.
Feb 15 12:40:44.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:40:44.896: INFO: namespace: e2e-tests-secrets-wbzfs, resource: bindings, ignored listing per whitelist
Feb 15 12:40:44.942: INFO: namespace e2e-tests-secrets-wbzfs deletion completed in 6.293234035s

• [SLOW TEST:17.225 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
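This is the same pattern as the ConfigMap-volume cases above, only backed by a Secret: the pod mounts the Secret volume and the file contains the raw bytes of Data. A short sketch with placeholder names and values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test-example"}, // placeholder name
            Data:       map[string][]byte{"data-1": []byte("value-1")}, // placeholder key/value
        }

        volume := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
            },
        }
        // The consuming pod mounts "secret-volume" and reads
        // /etc/secret-volume/data-1, just like the ConfigMap-backed pods above.
        fmt.Println(secret.Name, volume.Name)
    }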
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:40:44.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 15 12:40:45.410: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 15 12:40:50.419: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:40:52.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-zlpkh" for this suite.
Feb 15 12:41:02.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:41:02.894: INFO: namespace: e2e-tests-replication-controller-zlpkh, resource: bindings, ignored listing per whitelist
Feb 15 12:41:03.016: INFO: namespace e2e-tests-replication-controller-zlpkh deletion completed in 10.541411251s

• [SLOW TEST:18.074 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
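"Released" above means the pod's label is changed so it stops matching the RC's selector; the controller then orphans it (drops its owner reference) and creates a replacement to restore the replica count. A sketch of the label flip, assuming a v1.13-era client-go Patch signature (no context argument); the pod name is a placeholder since the real one is generated.

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Overwrite the label the RC selects on; after this patch the RC no
        // longer owns the pod and spins up a new one to restore replicas.
        patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`) // illustrative label value
        pod, err := client.CoreV1().Pods("e2e-tests-replication-controller-zlpkh").Patch(
            "pod-release-xxxxx", // placeholder for the generated pod name
            types.StrategicMergePatchType,
            patch,
        )
        if err != nil {
            panic(err)
        }
        fmt.Println(pod.Name, pod.Labels)
    }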
SSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:41:03.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 15 12:41:29.346: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:29.346: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:29.436298       8 log.go:172] (0xc000dbc4d0) (0xc002a7a640) Create stream
I0215 12:41:29.436478       8 log.go:172] (0xc000dbc4d0) (0xc002a7a640) Stream added, broadcasting: 1
I0215 12:41:29.447460       8 log.go:172] (0xc000dbc4d0) Reply frame received for 1
I0215 12:41:29.447511       8 log.go:172] (0xc000dbc4d0) (0xc002196000) Create stream
I0215 12:41:29.447529       8 log.go:172] (0xc000dbc4d0) (0xc002196000) Stream added, broadcasting: 3
I0215 12:41:29.449311       8 log.go:172] (0xc000dbc4d0) Reply frame received for 3
I0215 12:41:29.449362       8 log.go:172] (0xc000dbc4d0) (0xc001b8a320) Create stream
I0215 12:41:29.449381       8 log.go:172] (0xc000dbc4d0) (0xc001b8a320) Stream added, broadcasting: 5
I0215 12:41:29.450956       8 log.go:172] (0xc000dbc4d0) Reply frame received for 5
I0215 12:41:29.618409       8 log.go:172] (0xc000dbc4d0) Data frame received for 3
I0215 12:41:29.618506       8 log.go:172] (0xc002196000) (3) Data frame handling
I0215 12:41:29.618537       8 log.go:172] (0xc002196000) (3) Data frame sent
I0215 12:41:29.767909       8 log.go:172] (0xc000dbc4d0) Data frame received for 1
I0215 12:41:29.768214       8 log.go:172] (0xc000dbc4d0) (0xc001b8a320) Stream removed, broadcasting: 5
I0215 12:41:29.768331       8 log.go:172] (0xc002a7a640) (1) Data frame handling
I0215 12:41:29.768389       8 log.go:172] (0xc000dbc4d0) (0xc002196000) Stream removed, broadcasting: 3
I0215 12:41:29.768435       8 log.go:172] (0xc002a7a640) (1) Data frame sent
I0215 12:41:29.768461       8 log.go:172] (0xc000dbc4d0) (0xc002a7a640) Stream removed, broadcasting: 1
I0215 12:41:29.768504       8 log.go:172] (0xc000dbc4d0) Go away received
I0215 12:41:29.768820       8 log.go:172] (0xc000dbc4d0) (0xc002a7a640) Stream removed, broadcasting: 1
I0215 12:41:29.768842       8 log.go:172] (0xc000dbc4d0) (0xc002196000) Stream removed, broadcasting: 3
I0215 12:41:29.768859       8 log.go:172] (0xc000dbc4d0) (0xc001b8a320) Stream removed, broadcasting: 5
Feb 15 12:41:29.768: INFO: Exec stderr: ""
Feb 15 12:41:29.769: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:29.769: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:30.006938       8 log.go:172] (0xc00099d760) (0xc002252460) Create stream
I0215 12:41:30.007201       8 log.go:172] (0xc00099d760) (0xc002252460) Stream added, broadcasting: 1
I0215 12:41:30.106029       8 log.go:172] (0xc00099d760) Reply frame received for 1
I0215 12:41:30.106438       8 log.go:172] (0xc00099d760) (0xc0022525a0) Create stream
I0215 12:41:30.106499       8 log.go:172] (0xc00099d760) (0xc0022525a0) Stream added, broadcasting: 3
I0215 12:41:30.113382       8 log.go:172] (0xc00099d760) Reply frame received for 3
I0215 12:41:30.113575       8 log.go:172] (0xc00099d760) (0xc001b8a3c0) Create stream
I0215 12:41:30.113601       8 log.go:172] (0xc00099d760) (0xc001b8a3c0) Stream added, broadcasting: 5
I0215 12:41:30.119004       8 log.go:172] (0xc00099d760) Reply frame received for 5
I0215 12:41:30.272776       8 log.go:172] (0xc00099d760) Data frame received for 3
I0215 12:41:30.272897       8 log.go:172] (0xc0022525a0) (3) Data frame handling
I0215 12:41:30.272937       8 log.go:172] (0xc0022525a0) (3) Data frame sent
I0215 12:41:30.401918       8 log.go:172] (0xc00099d760) Data frame received for 1
I0215 12:41:30.402073       8 log.go:172] (0xc00099d760) (0xc001b8a3c0) Stream removed, broadcasting: 5
I0215 12:41:30.402113       8 log.go:172] (0xc002252460) (1) Data frame handling
I0215 12:41:30.402140       8 log.go:172] (0xc00099d760) (0xc0022525a0) Stream removed, broadcasting: 3
I0215 12:41:30.402173       8 log.go:172] (0xc002252460) (1) Data frame sent
I0215 12:41:30.402212       8 log.go:172] (0xc00099d760) (0xc002252460) Stream removed, broadcasting: 1
I0215 12:41:30.402231       8 log.go:172] (0xc00099d760) Go away received
I0215 12:41:30.402528       8 log.go:172] (0xc00099d760) (0xc002252460) Stream removed, broadcasting: 1
I0215 12:41:30.402604       8 log.go:172] (0xc00099d760) (0xc0022525a0) Stream removed, broadcasting: 3
I0215 12:41:30.402620       8 log.go:172] (0xc00099d760) (0xc001b8a3c0) Stream removed, broadcasting: 5
Feb 15 12:41:30.402: INFO: Exec stderr: ""
Feb 15 12:41:30.402: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:30.403: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:30.538971       8 log.go:172] (0xc00090fd90) (0xc002196320) Create stream
I0215 12:41:30.539325       8 log.go:172] (0xc00090fd90) (0xc002196320) Stream added, broadcasting: 1
I0215 12:41:30.556976       8 log.go:172] (0xc00090fd90) Reply frame received for 1
I0215 12:41:30.557156       8 log.go:172] (0xc00090fd90) (0xc001e0c0a0) Create stream
I0215 12:41:30.557189       8 log.go:172] (0xc00090fd90) (0xc001e0c0a0) Stream added, broadcasting: 3
I0215 12:41:30.559987       8 log.go:172] (0xc00090fd90) Reply frame received for 3
I0215 12:41:30.560054       8 log.go:172] (0xc00090fd90) (0xc001b8a460) Create stream
I0215 12:41:30.560090       8 log.go:172] (0xc00090fd90) (0xc001b8a460) Stream added, broadcasting: 5
I0215 12:41:30.562766       8 log.go:172] (0xc00090fd90) Reply frame received for 5
I0215 12:41:30.718812       8 log.go:172] (0xc00090fd90) Data frame received for 3
I0215 12:41:30.718936       8 log.go:172] (0xc001e0c0a0) (3) Data frame handling
I0215 12:41:30.718967       8 log.go:172] (0xc001e0c0a0) (3) Data frame sent
I0215 12:41:30.837379       8 log.go:172] (0xc00090fd90) (0xc001b8a460) Stream removed, broadcasting: 5
I0215 12:41:30.837546       8 log.go:172] (0xc00090fd90) Data frame received for 1
I0215 12:41:30.837597       8 log.go:172] (0xc00090fd90) (0xc001e0c0a0) Stream removed, broadcasting: 3
I0215 12:41:30.837670       8 log.go:172] (0xc002196320) (1) Data frame handling
I0215 12:41:30.837704       8 log.go:172] (0xc002196320) (1) Data frame sent
I0215 12:41:30.837717       8 log.go:172] (0xc00090fd90) (0xc002196320) Stream removed, broadcasting: 1
I0215 12:41:30.837742       8 log.go:172] (0xc00090fd90) Go away received
I0215 12:41:30.838202       8 log.go:172] (0xc00090fd90) (0xc002196320) Stream removed, broadcasting: 1
I0215 12:41:30.838237       8 log.go:172] (0xc00090fd90) (0xc001e0c0a0) Stream removed, broadcasting: 3
I0215 12:41:30.838282       8 log.go:172] (0xc00090fd90) (0xc001b8a460) Stream removed, broadcasting: 5
Feb 15 12:41:30.838: INFO: Exec stderr: ""
Feb 15 12:41:30.838: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:30.838: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:30.984094       8 log.go:172] (0xc0014642c0) (0xc001c2a5a0) Create stream
I0215 12:41:30.984262       8 log.go:172] (0xc0014642c0) (0xc001c2a5a0) Stream added, broadcasting: 1
I0215 12:41:30.988944       8 log.go:172] (0xc0014642c0) Reply frame received for 1
I0215 12:41:30.989031       8 log.go:172] (0xc0014642c0) (0xc001c2a640) Create stream
I0215 12:41:30.989040       8 log.go:172] (0xc0014642c0) (0xc001c2a640) Stream added, broadcasting: 3
I0215 12:41:30.990708       8 log.go:172] (0xc0014642c0) Reply frame received for 3
I0215 12:41:30.990738       8 log.go:172] (0xc0014642c0) (0xc001e0c140) Create stream
I0215 12:41:30.990767       8 log.go:172] (0xc0014642c0) (0xc001e0c140) Stream added, broadcasting: 5
I0215 12:41:30.992428       8 log.go:172] (0xc0014642c0) Reply frame received for 5
I0215 12:41:31.120264       8 log.go:172] (0xc0014642c0) Data frame received for 3
I0215 12:41:31.120418       8 log.go:172] (0xc001c2a640) (3) Data frame handling
I0215 12:41:31.120462       8 log.go:172] (0xc001c2a640) (3) Data frame sent
I0215 12:41:31.275556       8 log.go:172] (0xc0014642c0) Data frame received for 1
I0215 12:41:31.275767       8 log.go:172] (0xc001c2a5a0) (1) Data frame handling
I0215 12:41:31.275822       8 log.go:172] (0xc001c2a5a0) (1) Data frame sent
I0215 12:41:31.275870       8 log.go:172] (0xc0014642c0) (0xc001c2a5a0) Stream removed, broadcasting: 1
I0215 12:41:31.276403       8 log.go:172] (0xc0014642c0) (0xc001e0c140) Stream removed, broadcasting: 5
I0215 12:41:31.276469       8 log.go:172] (0xc0014642c0) (0xc001c2a640) Stream removed, broadcasting: 3
I0215 12:41:31.276511       8 log.go:172] (0xc0014642c0) Go away received
I0215 12:41:31.276578       8 log.go:172] (0xc0014642c0) (0xc001c2a5a0) Stream removed, broadcasting: 1
I0215 12:41:31.276590       8 log.go:172] (0xc0014642c0) (0xc001c2a640) Stream removed, broadcasting: 3
I0215 12:41:31.276600       8 log.go:172] (0xc0014642c0) (0xc001e0c140) Stream removed, broadcasting: 5
Feb 15 12:41:31.276: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 15 12:41:31.276: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:31.277: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:31.369239       8 log.go:172] (0xc0013162c0) (0xc001e0c320) Create stream
I0215 12:41:31.369479       8 log.go:172] (0xc0013162c0) (0xc001e0c320) Stream added, broadcasting: 1
I0215 12:41:31.374967       8 log.go:172] (0xc0013162c0) Reply frame received for 1
I0215 12:41:31.375016       8 log.go:172] (0xc0013162c0) (0xc001c2a6e0) Create stream
I0215 12:41:31.375029       8 log.go:172] (0xc0013162c0) (0xc001c2a6e0) Stream added, broadcasting: 3
I0215 12:41:31.376296       8 log.go:172] (0xc0013162c0) Reply frame received for 3
I0215 12:41:31.376332       8 log.go:172] (0xc0013162c0) (0xc001b8a5a0) Create stream
I0215 12:41:31.376342       8 log.go:172] (0xc0013162c0) (0xc001b8a5a0) Stream added, broadcasting: 5
I0215 12:41:31.379332       8 log.go:172] (0xc0013162c0) Reply frame received for 5
I0215 12:41:31.518588       8 log.go:172] (0xc0013162c0) Data frame received for 3
I0215 12:41:31.518696       8 log.go:172] (0xc001c2a6e0) (3) Data frame handling
I0215 12:41:31.518726       8 log.go:172] (0xc001c2a6e0) (3) Data frame sent
I0215 12:41:31.627212       8 log.go:172] (0xc0013162c0) (0xc001c2a6e0) Stream removed, broadcasting: 3
I0215 12:41:31.627524       8 log.go:172] (0xc0013162c0) Data frame received for 1
I0215 12:41:31.627553       8 log.go:172] (0xc001e0c320) (1) Data frame handling
I0215 12:41:31.627603       8 log.go:172] (0xc001e0c320) (1) Data frame sent
I0215 12:41:31.627624       8 log.go:172] (0xc0013162c0) (0xc001e0c320) Stream removed, broadcasting: 1
I0215 12:41:31.629389       8 log.go:172] (0xc0013162c0) (0xc001b8a5a0) Stream removed, broadcasting: 5
I0215 12:41:31.629565       8 log.go:172] (0xc0013162c0) Go away received
I0215 12:41:31.629685       8 log.go:172] (0xc0013162c0) (0xc001e0c320) Stream removed, broadcasting: 1
I0215 12:41:31.629728       8 log.go:172] (0xc0013162c0) (0xc001c2a6e0) Stream removed, broadcasting: 3
I0215 12:41:31.629794       8 log.go:172] (0xc0013162c0) (0xc001b8a5a0) Stream removed, broadcasting: 5
Feb 15 12:41:31.629: INFO: Exec stderr: ""
Feb 15 12:41:31.630: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:31.630: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:31.694771       8 log.go:172] (0xc0013422c0) (0xc001b8a820) Create stream
I0215 12:41:31.694908       8 log.go:172] (0xc0013422c0) (0xc001b8a820) Stream added, broadcasting: 1
I0215 12:41:31.702101       8 log.go:172] (0xc0013422c0) Reply frame received for 1
I0215 12:41:31.702180       8 log.go:172] (0xc0013422c0) (0xc0021963c0) Create stream
I0215 12:41:31.702194       8 log.go:172] (0xc0013422c0) (0xc0021963c0) Stream added, broadcasting: 3
I0215 12:41:31.704356       8 log.go:172] (0xc0013422c0) Reply frame received for 3
I0215 12:41:31.704399       8 log.go:172] (0xc0013422c0) (0xc001e0c3c0) Create stream
I0215 12:41:31.704418       8 log.go:172] (0xc0013422c0) (0xc001e0c3c0) Stream added, broadcasting: 5
I0215 12:41:31.707455       8 log.go:172] (0xc0013422c0) Reply frame received for 5
I0215 12:41:31.891221       8 log.go:172] (0xc0013422c0) Data frame received for 3
I0215 12:41:31.891343       8 log.go:172] (0xc0021963c0) (3) Data frame handling
I0215 12:41:31.891368       8 log.go:172] (0xc0021963c0) (3) Data frame sent
I0215 12:41:32.031146       8 log.go:172] (0xc0013422c0) Data frame received for 1
I0215 12:41:32.031345       8 log.go:172] (0xc0013422c0) (0xc0021963c0) Stream removed, broadcasting: 3
I0215 12:41:32.031406       8 log.go:172] (0xc001b8a820) (1) Data frame handling
I0215 12:41:32.031446       8 log.go:172] (0xc0013422c0) (0xc001e0c3c0) Stream removed, broadcasting: 5
I0215 12:41:32.031493       8 log.go:172] (0xc001b8a820) (1) Data frame sent
I0215 12:41:32.031507       8 log.go:172] (0xc0013422c0) (0xc001b8a820) Stream removed, broadcasting: 1
I0215 12:41:32.031529       8 log.go:172] (0xc0013422c0) Go away received
I0215 12:41:32.031745       8 log.go:172] (0xc0013422c0) (0xc001b8a820) Stream removed, broadcasting: 1
I0215 12:41:32.031752       8 log.go:172] (0xc0013422c0) (0xc0021963c0) Stream removed, broadcasting: 3
I0215 12:41:32.031757       8 log.go:172] (0xc0013422c0) (0xc001e0c3c0) Stream removed, broadcasting: 5
Feb 15 12:41:32.031: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 15 12:41:32.031: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:32.032: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:32.097535       8 log.go:172] (0xc000e002c0) (0xc002196780) Create stream
I0215 12:41:32.097642       8 log.go:172] (0xc000e002c0) (0xc002196780) Stream added, broadcasting: 1
I0215 12:41:32.102875       8 log.go:172] (0xc000e002c0) Reply frame received for 1
I0215 12:41:32.103007       8 log.go:172] (0xc000e002c0) (0xc001c2a780) Create stream
I0215 12:41:32.103035       8 log.go:172] (0xc000e002c0) (0xc001c2a780) Stream added, broadcasting: 3
I0215 12:41:32.104122       8 log.go:172] (0xc000e002c0) Reply frame received for 3
I0215 12:41:32.104165       8 log.go:172] (0xc000e002c0) (0xc001b8a8c0) Create stream
I0215 12:41:32.104190       8 log.go:172] (0xc000e002c0) (0xc001b8a8c0) Stream added, broadcasting: 5
I0215 12:41:32.105899       8 log.go:172] (0xc000e002c0) Reply frame received for 5
I0215 12:41:32.252780       8 log.go:172] (0xc000e002c0) Data frame received for 3
I0215 12:41:32.252909       8 log.go:172] (0xc001c2a780) (3) Data frame handling
I0215 12:41:32.252955       8 log.go:172] (0xc001c2a780) (3) Data frame sent
I0215 12:41:32.353960       8 log.go:172] (0xc000e002c0) (0xc001c2a780) Stream removed, broadcasting: 3
I0215 12:41:32.354653       8 log.go:172] (0xc000e002c0) Data frame received for 1
I0215 12:41:32.354828       8 log.go:172] (0xc000e002c0) (0xc001b8a8c0) Stream removed, broadcasting: 5
I0215 12:41:32.354928       8 log.go:172] (0xc002196780) (1) Data frame handling
I0215 12:41:32.354955       8 log.go:172] (0xc002196780) (1) Data frame sent
I0215 12:41:32.354982       8 log.go:172] (0xc000e002c0) (0xc002196780) Stream removed, broadcasting: 1
I0215 12:41:32.355055       8 log.go:172] (0xc000e002c0) Go away received
I0215 12:41:32.355375       8 log.go:172] (0xc000e002c0) (0xc002196780) Stream removed, broadcasting: 1
I0215 12:41:32.355390       8 log.go:172] (0xc000e002c0) (0xc001c2a780) Stream removed, broadcasting: 3
I0215 12:41:32.355403       8 log.go:172] (0xc000e002c0) (0xc001b8a8c0) Stream removed, broadcasting: 5
Feb 15 12:41:32.355: INFO: Exec stderr: ""
Feb 15 12:41:32.355: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:32.355: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:32.475613       8 log.go:172] (0xc000e00790) (0xc002196e60) Create stream
I0215 12:41:32.475861       8 log.go:172] (0xc000e00790) (0xc002196e60) Stream added, broadcasting: 1
I0215 12:41:32.482848       8 log.go:172] (0xc000e00790) Reply frame received for 1
I0215 12:41:32.483057       8 log.go:172] (0xc000e00790) (0xc001b8a960) Create stream
I0215 12:41:32.483079       8 log.go:172] (0xc000e00790) (0xc001b8a960) Stream added, broadcasting: 3
I0215 12:41:32.488904       8 log.go:172] (0xc000e00790) Reply frame received for 3
I0215 12:41:32.488941       8 log.go:172] (0xc000e00790) (0xc002196f00) Create stream
I0215 12:41:32.488952       8 log.go:172] (0xc000e00790) (0xc002196f00) Stream added, broadcasting: 5
I0215 12:41:32.491082       8 log.go:172] (0xc000e00790) Reply frame received for 5
I0215 12:41:32.662322       8 log.go:172] (0xc000e00790) Data frame received for 3
I0215 12:41:32.662467       8 log.go:172] (0xc001b8a960) (3) Data frame handling
I0215 12:41:32.662607       8 log.go:172] (0xc001b8a960) (3) Data frame sent
I0215 12:41:32.759139       8 log.go:172] (0xc000e00790) Data frame received for 1
I0215 12:41:32.759320       8 log.go:172] (0xc002196e60) (1) Data frame handling
I0215 12:41:32.759348       8 log.go:172] (0xc002196e60) (1) Data frame sent
I0215 12:41:32.759379       8 log.go:172] (0xc000e00790) (0xc002196e60) Stream removed, broadcasting: 1
I0215 12:41:32.759610       8 log.go:172] (0xc000e00790) (0xc001b8a960) Stream removed, broadcasting: 3
I0215 12:41:32.760032       8 log.go:172] (0xc000e00790) (0xc002196f00) Stream removed, broadcasting: 5
I0215 12:41:32.760116       8 log.go:172] (0xc000e00790) (0xc002196e60) Stream removed, broadcasting: 1
I0215 12:41:32.760141       8 log.go:172] (0xc000e00790) (0xc001b8a960) Stream removed, broadcasting: 3
I0215 12:41:32.760147       8 log.go:172] (0xc000e00790) (0xc002196f00) Stream removed, broadcasting: 5
Feb 15 12:41:32.760: INFO: Exec stderr: ""
Feb 15 12:41:32.760: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:32.761: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:32.762893       8 log.go:172] (0xc000e00790) EOF received
I0215 12:41:32.849505       8 log.go:172] (0xc001464790) (0xc001c2aa00) Create stream
I0215 12:41:32.849757       8 log.go:172] (0xc001464790) (0xc001c2aa00) Stream added, broadcasting: 1
I0215 12:41:32.863679       8 log.go:172] (0xc001464790) Reply frame received for 1
I0215 12:41:32.863808       8 log.go:172] (0xc001464790) (0xc001e0c460) Create stream
I0215 12:41:32.863831       8 log.go:172] (0xc001464790) (0xc001e0c460) Stream added, broadcasting: 3
I0215 12:41:32.864893       8 log.go:172] (0xc001464790) Reply frame received for 3
I0215 12:41:32.864941       8 log.go:172] (0xc001464790) (0xc001b8aa00) Create stream
I0215 12:41:32.864958       8 log.go:172] (0xc001464790) (0xc001b8aa00) Stream added, broadcasting: 5
I0215 12:41:32.865960       8 log.go:172] (0xc001464790) Reply frame received for 5
I0215 12:41:32.978017       8 log.go:172] (0xc001464790) Data frame received for 3
I0215 12:41:32.978322       8 log.go:172] (0xc001e0c460) (3) Data frame handling
I0215 12:41:32.978411       8 log.go:172] (0xc001e0c460) (3) Data frame sent
I0215 12:41:33.114710       8 log.go:172] (0xc001464790) Data frame received for 1
I0215 12:41:33.114979       8 log.go:172] (0xc001464790) (0xc001e0c460) Stream removed, broadcasting: 3
I0215 12:41:33.115081       8 log.go:172] (0xc001c2aa00) (1) Data frame handling
I0215 12:41:33.115137       8 log.go:172] (0xc001c2aa00) (1) Data frame sent
I0215 12:41:33.115181       8 log.go:172] (0xc001464790) (0xc001b8aa00) Stream removed, broadcasting: 5
I0215 12:41:33.115270       8 log.go:172] (0xc001464790) (0xc001c2aa00) Stream removed, broadcasting: 1
I0215 12:41:33.115350       8 log.go:172] (0xc001464790) Go away received
I0215 12:41:33.116029       8 log.go:172] (0xc001464790) (0xc001c2aa00) Stream removed, broadcasting: 1
I0215 12:41:33.116160       8 log.go:172] (0xc001464790) (0xc001e0c460) Stream removed, broadcasting: 3
I0215 12:41:33.116175       8 log.go:172] (0xc001464790) (0xc001b8aa00) Stream removed, broadcasting: 5
Feb 15 12:41:33.116: INFO: Exec stderr: ""
Feb 15 12:41:33.116: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-42x2n PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 12:41:33.116: INFO: >>> kubeConfig: /root/.kube/config
I0215 12:41:33.175625       8 log.go:172] (0xc00099dc30) (0xc0022528c0) Create stream
I0215 12:41:33.175698       8 log.go:172] (0xc00099dc30) (0xc0022528c0) Stream added, broadcasting: 1
I0215 12:41:33.181297       8 log.go:172] (0xc00099dc30) Reply frame received for 1
I0215 12:41:33.181352       8 log.go:172] (0xc00099dc30) (0xc001b8aaa0) Create stream
I0215 12:41:33.181367       8 log.go:172] (0xc00099dc30) (0xc001b8aaa0) Stream added, broadcasting: 3
I0215 12:41:33.182397       8 log.go:172] (0xc00099dc30) Reply frame received for 3
I0215 12:41:33.182436       8 log.go:172] (0xc00099dc30) (0xc001c2aaa0) Create stream
I0215 12:41:33.182450       8 log.go:172] (0xc00099dc30) (0xc001c2aaa0) Stream added, broadcasting: 5
I0215 12:41:33.185194       8 log.go:172] (0xc00099dc30) Reply frame received for 5
I0215 12:41:33.280790       8 log.go:172] (0xc00099dc30) Data frame received for 3
I0215 12:41:33.280861       8 log.go:172] (0xc001b8aaa0) (3) Data frame handling
I0215 12:41:33.280874       8 log.go:172] (0xc001b8aaa0) (3) Data frame sent
I0215 12:41:33.375252       8 log.go:172] (0xc00099dc30) (0xc001b8aaa0) Stream removed, broadcasting: 3
I0215 12:41:33.375564       8 log.go:172] (0xc00099dc30) Data frame received for 1
I0215 12:41:33.375598       8 log.go:172] (0xc0022528c0) (1) Data frame handling
I0215 12:41:33.375630       8 log.go:172] (0xc0022528c0) (1) Data frame sent
I0215 12:41:33.375647       8 log.go:172] (0xc00099dc30) (0xc0022528c0) Stream removed, broadcasting: 1
I0215 12:41:33.375948       8 log.go:172] (0xc00099dc30) (0xc001c2aaa0) Stream removed, broadcasting: 5
I0215 12:41:33.376103       8 log.go:172] (0xc00099dc30) Go away received
I0215 12:41:33.376292       8 log.go:172] (0xc00099dc30) (0xc0022528c0) Stream removed, broadcasting: 1
I0215 12:41:33.376383       8 log.go:172] (0xc00099dc30) (0xc001b8aaa0) Stream removed, broadcasting: 3
I0215 12:41:33.376397       8 log.go:172] (0xc00099dc30) (0xc001c2aaa0) Stream removed, broadcasting: 5
Feb 15 12:41:33.376: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:41:33.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-42x2n" for this suite.
Feb 15 12:42:27.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:42:27.567: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-42x2n, resource: bindings, ignored listing per whitelist
Feb 15 12:42:27.642: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-42x2n deletion completed in 54.24987756s

• [SLOW TEST:84.626 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
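The KubeletManagedEtcHosts run above boils down to three checks: containers in an ordinary (hostNetwork=false) pod see the kubelet-managed /etc/hosts, a container that mounts its own file over /etc/hosts is left alone, and containers in a hostNetwork=true pod see the node's file. A rough manual reproduction, assuming a busybox image and placeholder pod names rather than the manifests the framework generates:

# Ordinary pod: the kubelet writes /etc/hosts, which begins with a
# "# Kubernetes-managed hosts file." header.
kubectl run etc-hosts-demo --image=busybox --restart=Never -- sleep 3600
kubectl exec etc-hosts-demo -- cat /etc/hosts

# hostNetwork pod: /etc/hosts is the node's own file, not kubelet-managed.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnet-demo
spec:
  hostNetwork: true
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl exec etc-hosts-hostnet-demo -- cat /etc/hosts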
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:42:27.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 15 12:42:27.978: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hfbc2,SelfLink:/api/v1/namespaces/e2e-tests-watch-hfbc2/configmaps/e2e-watch-test-resource-version,UID:9fc31b4d-4ff0-11ea-a994-fa163e34d433,ResourceVersion:21758519,Generation:0,CreationTimestamp:2020-02-15 12:42:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 15 12:42:27.978: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hfbc2,SelfLink:/api/v1/namespaces/e2e-tests-watch-hfbc2/configmaps/e2e-watch-test-resource-version,UID:9fc31b4d-4ff0-11ea-a994-fa163e34d433,ResourceVersion:21758520,Generation:0,CreationTimestamp:2020-02-15 12:42:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:42:27.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hfbc2" for this suite.
Feb 15 12:42:34.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:42:34.127: INFO: namespace: e2e-tests-watch-hfbc2, resource: bindings, ignored listing per whitelist
Feb 15 12:42:34.304: INFO: namespace e2e-tests-watch-hfbc2 deletion completed in 6.320837568s

• [SLOW TEST:6.662 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
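The Watchers case above updates a ConfigMap twice, deletes it, then opens a watch at the resourceVersion returned by the first update and expects to receive exactly the later MODIFIED and DELETED events. The same semantics can be exercised directly against the API server; the namespace and name below come from this run, while the resourceVersion value is illustrative:

kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-watch-hfbc2/configmaps?watch=true&fieldSelector=metadata.name%3De2e-watch-test-resource-version&resourceVersion=21758518"
# Only events newer than the supplied resourceVersion are replayed on the watch.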
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:42:34.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 15 12:42:34.417: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix715065832/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:42:34.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v8lzl" for this suite.
Feb 15 12:42:40.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:42:40.910: INFO: namespace: e2e-tests-kubectl-v8lzl, resource: bindings, ignored listing per whitelist
Feb 15 12:42:40.940: INFO: namespace e2e-tests-kubectl-v8lzl deletion completed in 6.27853047s

• [SLOW TEST:6.635 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
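For --unix-socket, the proxy above listens on a local socket instead of a TCP port, and the test simply fetches /api/ through it. The equivalent by hand, with an arbitrary socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/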
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:42:40.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-xqkr
STEP: Creating a pod to test atomic-volume-subpath
Feb 15 12:42:41.194: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xqkr" in namespace "e2e-tests-subpath-m29kl" to be "success or failure"
Feb 15 12:42:41.253: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 58.809927ms
Feb 15 12:42:43.688: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493771709s
Feb 15 12:42:45.720: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525618611s
Feb 15 12:42:47.952: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758057831s
Feb 15 12:42:49.971: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.776682852s
Feb 15 12:42:51.996: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.802238777s
Feb 15 12:42:54.074: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.880074363s
Feb 15 12:42:56.090: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.895533718s
Feb 15 12:42:58.107: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 16.912616352s
Feb 15 12:43:00.123: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 18.928927189s
Feb 15 12:43:02.141: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 20.946933067s
Feb 15 12:43:04.157: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 22.962635247s
Feb 15 12:43:06.176: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 24.982074499s
Feb 15 12:43:08.220: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 27.025433646s
Feb 15 12:43:10.239: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 29.045262828s
Feb 15 12:43:12.255: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 31.060970673s
Feb 15 12:43:14.275: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 33.081347181s
Feb 15 12:43:16.293: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Running", Reason="", readiness=false. Elapsed: 35.098356481s
Feb 15 12:43:18.306: INFO: Pod "pod-subpath-test-downwardapi-xqkr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.111402458s
STEP: Saw pod success
Feb 15 12:43:18.306: INFO: Pod "pod-subpath-test-downwardapi-xqkr" satisfied condition "success or failure"
Feb 15 12:43:18.311: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-xqkr container test-container-subpath-downwardapi-xqkr: 
STEP: delete the pod
Feb 15 12:43:18.531: INFO: Waiting for pod pod-subpath-test-downwardapi-xqkr to disappear
Feb 15 12:43:18.550: INFO: Pod pod-subpath-test-downwardapi-xqkr no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xqkr
Feb 15 12:43:18.550: INFO: Deleting pod "pod-subpath-test-downwardapi-xqkr" in namespace "e2e-tests-subpath-m29kl"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:43:18.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-m29kl" for this suite.
Feb 15 12:43:24.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:43:24.859: INFO: namespace: e2e-tests-subpath-m29kl, resource: bindings, ignored listing per whitelist
Feb 15 12:43:24.926: INFO: namespace e2e-tests-subpath-m29kl deletion completed in 6.32695418s

• [SLOW TEST:43.986 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
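The Subpath case above mounts a single file out of a downwardAPI volume via subPath and reads it while the pod runs to Succeeded. A minimal sketch of that wiring, assuming a busybox image and an invented pod name (the real test projects different fields and reads the file in a loop):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo/podname   # only the 'podname' file is mounted
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downwardapi-demo   # prints: subpath-downwardapi-demo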
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:43:24.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 15 12:43:25.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 15 12:43:25.277: INFO: stderr: ""
Feb 15 12:43:25.277: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:43:25.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wt6sw" for this suite.
Feb 15 12:43:31.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:43:31.525: INFO: namespace: e2e-tests-kubectl-wt6sw, resource: bindings, ignored listing per whitelist
Feb 15 12:43:31.599: INFO: namespace e2e-tests-kubectl-wt6sw deletion completed in 6.311831033s

• [SLOW TEST:6.672 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
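The Kubectl version check above only asserts that both the Client Version and Server Version lines are emitted, which is easy to confirm by hand:

kubectl version                  # human-readable Client/Server version.Info lines
kubectl version --output=json    # same data in structured form, if preferred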
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:43:31.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 15 12:43:31.888: INFO: namespace e2e-tests-kubectl-7bxcg
Feb 15 12:43:31.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7bxcg'
Feb 15 12:43:32.416: INFO: stderr: ""
Feb 15 12:43:32.417: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 15 12:43:33.957: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:33.957: INFO: Found 0 / 1
Feb 15 12:43:34.450: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:34.451: INFO: Found 0 / 1
Feb 15 12:43:35.430: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:35.430: INFO: Found 0 / 1
Feb 15 12:43:36.457: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:36.457: INFO: Found 0 / 1
Feb 15 12:43:37.948: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:37.948: INFO: Found 0 / 1
Feb 15 12:43:38.450: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:38.451: INFO: Found 0 / 1
Feb 15 12:43:39.431: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:39.431: INFO: Found 0 / 1
Feb 15 12:43:40.430: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:40.430: INFO: Found 0 / 1
Feb 15 12:43:41.444: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:41.444: INFO: Found 1 / 1
Feb 15 12:43:41.444: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 15 12:43:41.452: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 12:43:41.452: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 15 12:43:41.452: INFO: wait on redis-master startup in e2e-tests-kubectl-7bxcg 
Feb 15 12:43:41.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j48gr redis-master --namespace=e2e-tests-kubectl-7bxcg'
Feb 15 12:43:41.680: INFO: stderr: ""
Feb 15 12:43:41.680: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Feb 12:43:40.133 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Feb 12:43:40.133 # Server started, Redis version 3.2.12\n1:M 15 Feb 12:43:40.135 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Feb 12:43:40.135 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 15 12:43:41.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-7bxcg'
Feb 15 12:43:41.907: INFO: stderr: ""
Feb 15 12:43:41.907: INFO: stdout: "service/rm2 exposed\n"
Feb 15 12:43:41.914: INFO: Service rm2 in namespace e2e-tests-kubectl-7bxcg found.
STEP: exposing service
Feb 15 12:43:43.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-7bxcg'
Feb 15 12:43:44.500: INFO: stderr: ""
Feb 15 12:43:44.500: INFO: stdout: "service/rm3 exposed\n"
Feb 15 12:43:44.517: INFO: Service rm3 in namespace e2e-tests-kubectl-7bxcg found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:43:46.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7bxcg" for this suite.
Feb 15 12:44:12.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:44:12.785: INFO: namespace: e2e-tests-kubectl-7bxcg, resource: bindings, ignored listing per whitelist
Feb 15 12:44:12.865: INFO: namespace e2e-tests-kubectl-7bxcg deletion completed in 26.256030758s

• [SLOW TEST:41.266 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
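The Kubectl expose flow above is driven entirely by the commands recorded in the log; reproducing it only needs an existing replication controller with a matching selector (here the redis-master rc created from the suite's fixture):

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get services rm2 rm3 -o wide   # both should map ports 1234/2345 to targetPort 6379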
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:44:12.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0215 12:44:23.215298       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 15 12:44:23.215: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:44:23.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-znndw" for this suite.
Feb 15 12:44:31.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:44:31.283: INFO: namespace: e2e-tests-gc-znndw, resource: bindings, ignored listing per whitelist
Feb 15 12:44:31.444: INFO: namespace e2e-tests-gc-znndw deletion completed in 8.223219309s

• [SLOW TEST:18.578 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
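The garbage-collector case above relies on default (non-orphaning) deletion: removing a replication controller lets the garbage collector delete the pods it owns. By hand, with an illustrative rc name and label selector:

kubectl delete rc simpletest.rc            # default cascade deletes the owned pods
kubectl get pods -l name=simpletest        # selector is a placeholder; should drain to empty
# With the 1.13-era client, --cascade=false would orphan the pods instead.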
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:44:31.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ldb8m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ldb8m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 12:44:46.448: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.456: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.477: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.492: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.514: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.526: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.538: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.554: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.562: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.572: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.586: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.599: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.607: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.616: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.626: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.635: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.644: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.651: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.657: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.662: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007: the server could not find the requested resource (get pods dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007)
Feb 15 12:44:46.662: INFO: Lookups using e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ldb8m.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 15 12:44:51.846: INFO: DNS probes using e2e-tests-dns-ldb8m/dns-test-e9b6cd29-4ff0-11ea-960a-0242ac110007 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:44:51.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-ldb8m" for this suite.
Feb 15 12:45:00.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:45:00.410: INFO: namespace: e2e-tests-dns-ldb8m, resource: bindings, ignored listing per whitelist
Feb 15 12:45:00.633: INFO: namespace e2e-tests-dns-ldb8m deletion completed in 8.596214107s

• [SLOW TEST:29.189 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
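For context on what the DNS spec above resolves, here is a minimal Go sketch (standard library only, not part of the test run) that performs the same style of lookups; it only succeeds when run inside a pod whose resolver points at the cluster DNS service.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Cluster-internal names of the kind probed by the DNS spec above.
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("lookup %s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s resolves to %v\n", name, addrs)
	}
}
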
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:45:00.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 15 12:45:00.984: INFO: Waiting up to 5m0s for pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-cwrvt" to be "success or failure"
Feb 15 12:45:01.005: INFO: Pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.581089ms
Feb 15 12:45:03.019: INFO: Pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03530954s
Feb 15 12:45:05.092: INFO: Pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107343393s
Feb 15 12:45:07.174: INFO: Pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189928969s
Feb 15 12:45:09.184: INFO: Pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199458085s
Feb 15 12:45:11.200: INFO: Pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.215445647s
STEP: Saw pod success
Feb 15 12:45:11.200: INFO: Pod "pod-fafb5567-4ff0-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:45:11.206: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fafb5567-4ff0-11ea-960a-0242ac110007 container test-container: 
STEP: delete the pod
Feb 15 12:45:11.521: INFO: Waiting for pod pod-fafb5567-4ff0-11ea-960a-0242ac110007 to disappear
Feb 15 12:45:11.531: INFO: Pod pod-fafb5567-4ff0-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:45:11.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cwrvt" for this suite.
Feb 15 12:45:17.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:45:17.678: INFO: namespace: e2e-tests-emptydir-cwrvt, resource: bindings, ignored listing per whitelist
Feb 15 12:45:17.742: INFO: namespace e2e-tests-emptydir-cwrvt deletion completed in 6.20150141s

• [SLOW TEST:17.106 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
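As a sketch of the kind of pod the EmptyDir spec above creates: an emptyDir volume backed by tmpfs (medium Memory) mounted into a single test container. The image, command, and object names below are placeholders, not taken from the test source.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Build a pod with a memory-backed (tmpfs) emptyDir volume and print it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /mnt/test/file && chmod 0644 /mnt/test/file && ls -ln /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
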
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:45:17.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-051b3603-4ff1-11ea-960a-0242ac110007
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-051b3603-4ff1-11ea-960a-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:45:28.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6bp76" for this suite.
Feb 15 12:45:54.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:45:54.179: INFO: namespace: e2e-tests-projected-6bp76, resource: bindings, ignored listing per whitelist
Feb 15 12:45:54.290: INFO: namespace e2e-tests-projected-6bp76 deletion completed in 26.187926249s

• [SLOW TEST:36.549 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:45:54.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0215 12:46:35.277668       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 15 12:46:35.277: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:46:35.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9gkt5" for this suite.
Feb 15 12:46:55.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:46:55.504: INFO: namespace: e2e-tests-gc-9gkt5, resource: bindings, ignored listing per whitelist
Feb 15 12:46:55.560: INFO: namespace e2e-tests-gc-9gkt5 deletion completed in 20.274475881s

• [SLOW TEST:61.268 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
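A minimal client-go sketch of the behaviour the garbage-collector spec above verifies: deleting a replication controller while orphaning its pods. It assumes the pre-1.17 client-go call signatures (no context argument), matching this cluster's era; the RC name and namespace are placeholders.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Orphan propagation: the RC is removed, its pods keep running.
	orphan := metav1.DeletePropagationOrphan
	err = client.CoreV1().ReplicationControllers("default").Delete(
		"simpletest-rc", &metav1.DeleteOptions{PropagationPolicy: &orphan})
	fmt.Println("delete (orphaning dependents) returned:", err)
}
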
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:46:55.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-3fcc8523-4ff1-11ea-960a-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 15 12:46:56.619: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-7wklt" to be "success or failure"
Feb 15 12:46:56.632: INFO: Pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.372977ms
Feb 15 12:46:58.756: INFO: Pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136769713s
Feb 15 12:47:00.780: INFO: Pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160147843s
Feb 15 12:47:02.801: INFO: Pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181247562s
Feb 15 12:47:04.816: INFO: Pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197124748s
Feb 15 12:47:08.531: INFO: Pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.911723385s
STEP: Saw pod success
Feb 15 12:47:08.532: INFO: Pod "pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:47:08.612: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 12:47:09.117: INFO: Waiting for pod pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007 to disappear
Feb 15 12:47:09.137: INFO: Pod pod-projected-configmaps-3fce3d4e-4ff1-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:47:09.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7wklt" for this suite.
Feb 15 12:47:15.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:47:15.229: INFO: namespace: e2e-tests-projected-7wklt, resource: bindings, ignored listing per whitelist
Feb 15 12:47:15.328: INFO: namespace e2e-tests-projected-7wklt deletion completed in 6.178450316s

• [SLOW TEST:19.768 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
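For reference, a sketch of the volume definition a pod uses to consume a ConfigMap through the projected volume source, with an explicit defaultMode, as the Projected configMap specs above exercise. The ConfigMap and volume names are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0644)
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume",
						},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
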
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:47:15.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-4b2f4f3a-4ff1-11ea-960a-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 15 12:47:15.558: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-h977t" to be "success or failure"
Feb 15 12:47:15.581: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 22.381991ms
Feb 15 12:47:17.719: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160495049s
Feb 15 12:47:19.731: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172790247s
Feb 15 12:47:21.744: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185480185s
Feb 15 12:47:23.757: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198531761s
Feb 15 12:47:25.779: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.220259308s
Feb 15 12:47:29.083: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.524619961s
STEP: Saw pod success
Feb 15 12:47:29.084: INFO: Pod "pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:47:29.146: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 12:47:29.311: INFO: Waiting for pod pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007 to disappear
Feb 15 12:47:29.381: INFO: Pod pod-projected-configmaps-4b317001-4ff1-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:47:29.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h977t" for this suite.
Feb 15 12:47:35.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:47:35.597: INFO: namespace: e2e-tests-projected-h977t, resource: bindings, ignored listing per whitelist
Feb 15 12:47:35.598: INFO: namespace e2e-tests-projected-h977t deletion completed in 6.202236687s

• [SLOW TEST:20.269 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:47:35.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb 15 12:47:35.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 15 12:47:36.102: INFO: stderr: ""
Feb 15 12:47:36.102: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:47:36.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-llwl6" for this suite.
Feb 15 12:47:42.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:47:42.701: INFO: namespace: e2e-tests-kubectl-llwl6, resource: bindings, ignored listing per whitelist
Feb 15 12:47:42.737: INFO: namespace e2e-tests-kubectl-llwl6 deletion completed in 6.618280051s

• [SLOW TEST:7.139 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
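A small Go sketch of the same check the kubectl spec above performs: list the served API group/versions and confirm the core "v1" version is among them. It assumes kubectl is on PATH and uses the kubeconfig path seen throughout this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "api-versions").Output()
	if err != nil {
		panic(err)
	}
	found := false
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "v1" {
			found = true
			break
		}
	}
	fmt.Println("core v1 served:", found)
}
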
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:47:42.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 15 12:47:53.516: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5b7cdeb7-4ff1-11ea-960a-0242ac110007"
Feb 15 12:47:53.516: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5b7cdeb7-4ff1-11ea-960a-0242ac110007" in namespace "e2e-tests-pods-t8dlw" to be "terminated due to deadline exceeded"
Feb 15 12:47:53.556: INFO: Pod "pod-update-activedeadlineseconds-5b7cdeb7-4ff1-11ea-960a-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 39.34117ms
Feb 15 12:47:55.580: INFO: Pod "pod-update-activedeadlineseconds-5b7cdeb7-4ff1-11ea-960a-0242ac110007": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.06355675s
Feb 15 12:47:55.580: INFO: Pod "pod-update-activedeadlineseconds-5b7cdeb7-4ff1-11ea-960a-0242ac110007" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:47:55.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-t8dlw" for this suite.
Feb 15 12:48:01.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:48:01.684: INFO: namespace: e2e-tests-pods-t8dlw, resource: bindings, ignored listing per whitelist
Feb 15 12:48:01.837: INFO: namespace e2e-tests-pods-t8dlw deletion completed in 6.235917574s

• [SLOW TEST:19.100 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
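A sketch of the update performed by the Pods spec above: setting a short activeDeadlineSeconds on a running pod so the kubelet terminates it with reason DeadlineExceeded. It assumes pre-1.17 client-go signatures (no context arguments); the function, pod, and namespace names are placeholders.

package podupdate

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setActiveDeadline fetches a pod, sets spec.activeDeadlineSeconds, and
// writes it back; the pod fails once the deadline elapses.
func setActiveDeadline(client kubernetes.Interface, namespace, name string, seconds int64) error {
	pod, err := client.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Spec.ActiveDeadlineSeconds = &seconds
	_, err = client.CoreV1().Pods(namespace).Update(pod)
	return err
}
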
SSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:48:01.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-66f13e51-4ff1-11ea-960a-0242ac110007
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-66f13e51-4ff1-11ea-960a-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:48:12.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p8sk7" for this suite.
Feb 15 12:48:36.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:48:36.696: INFO: namespace: e2e-tests-configmap-p8sk7, resource: bindings, ignored listing per whitelist
Feb 15 12:48:36.743: INFO: namespace e2e-tests-configmap-p8sk7 deletion completed in 24.324520841s

• [SLOW TEST:34.905 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
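A sketch of the update step behind the ConfigMap "updates should be reflected in volume" spec above: change one key of a ConfigMap and a pod mounting it as a volume eventually sees the new value without restarting. Pre-1.17 client-go signatures; the function and object names are placeholders.

package cmupdate

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMapKey rewrites a single key in an existing ConfigMap; the
// kubelet syncs the mounted projection on its next update pass.
func updateConfigMapKey(client kubernetes.Interface, namespace, name, key, value string) error {
	cm, err := client.CoreV1().ConfigMaps(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data[key] = value
	_, err = client.CoreV1().ConfigMaps(namespace).Update(cm)
	return err
}
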
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:48:36.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-7bb15ce8-4ff1-11ea-960a-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 15 12:48:36.940: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-v5wzx" to be "success or failure"
Feb 15 12:48:36.962: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 22.062955ms
Feb 15 12:48:38.989: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049408829s
Feb 15 12:48:41.157: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21669349s
Feb 15 12:48:43.180: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24021874s
Feb 15 12:48:45.205: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265489607s
Feb 15 12:48:47.236: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.295681852s
Feb 15 12:48:49.325: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.385511604s
STEP: Saw pod success
Feb 15 12:48:49.326: INFO: Pod "pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:48:49.339: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 15 12:48:49.900: INFO: Waiting for pod pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007 to disappear
Feb 15 12:48:49.917: INFO: Pod pod-projected-secrets-7bb24a47-4ff1-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:48:49.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v5wzx" for this suite.
Feb 15 12:48:56.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:48:56.299: INFO: namespace: e2e-tests-projected-v5wzx, resource: bindings, ignored listing per whitelist
Feb 15 12:48:56.299: INFO: namespace e2e-tests-projected-v5wzx deletion completed in 6.371938978s

• [SLOW TEST:19.555 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:48:56.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 15 12:48:58.007: INFO: Pod name wrapped-volume-race-883c58ab-4ff1-11ea-960a-0242ac110007: Found 0 pods out of 5
Feb 15 12:49:03.032: INFO: Pod name wrapped-volume-race-883c58ab-4ff1-11ea-960a-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-883c58ab-4ff1-11ea-960a-0242ac110007 in namespace e2e-tests-emptydir-wrapper-88th9, will wait for the garbage collector to delete the pods
Feb 15 12:50:45.185: INFO: Deleting ReplicationController wrapped-volume-race-883c58ab-4ff1-11ea-960a-0242ac110007 took: 36.578633ms
Feb 15 12:50:45.586: INFO: Terminating ReplicationController wrapped-volume-race-883c58ab-4ff1-11ea-960a-0242ac110007 pods took: 401.241933ms
STEP: Creating RC which spawns configmap-volume pods
Feb 15 12:51:34.051: INFO: Pod name wrapped-volume-race-e52d403a-4ff1-11ea-960a-0242ac110007: Found 0 pods out of 5
Feb 15 12:51:39.079: INFO: Pod name wrapped-volume-race-e52d403a-4ff1-11ea-960a-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e52d403a-4ff1-11ea-960a-0242ac110007 in namespace e2e-tests-emptydir-wrapper-88th9, will wait for the garbage collector to delete the pods
Feb 15 12:53:23.221: INFO: Deleting ReplicationController wrapped-volume-race-e52d403a-4ff1-11ea-960a-0242ac110007 took: 30.904119ms
Feb 15 12:53:23.622: INFO: Terminating ReplicationController wrapped-volume-race-e52d403a-4ff1-11ea-960a-0242ac110007 pods took: 400.769547ms
STEP: Creating RC which spawns configmap-volume pods
Feb 15 12:54:12.857: INFO: Pod name wrapped-volume-race-43d994a2-4ff2-11ea-960a-0242ac110007: Found 0 pods out of 5
Feb 15 12:54:17.919: INFO: Pod name wrapped-volume-race-43d994a2-4ff2-11ea-960a-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-43d994a2-4ff2-11ea-960a-0242ac110007 in namespace e2e-tests-emptydir-wrapper-88th9, will wait for the garbage collector to delete the pods
Feb 15 12:56:14.347: INFO: Deleting ReplicationController wrapped-volume-race-43d994a2-4ff2-11ea-960a-0242ac110007 took: 100.394671ms
Feb 15 12:56:15.247: INFO: Terminating ReplicationController wrapped-volume-race-43d994a2-4ff2-11ea-960a-0242ac110007 pods took: 900.837704ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:57:10.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-88th9" for this suite.
Feb 15 12:57:22.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:57:23.542: INFO: namespace: e2e-tests-emptydir-wrapper-88th9, resource: bindings, ignored listing per whitelist
Feb 15 12:57:23.557: INFO: namespace e2e-tests-emptydir-wrapper-88th9 deletion completed in 12.856435352s

• [SLOW TEST:507.258 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:57:23.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 15 12:58:02.419: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:02.446: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:04.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:04.462: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:06.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:06.478: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:08.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:08.833: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:10.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:10.633: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:12.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:12.478: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:14.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:14.470: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:16.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:16.473: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:18.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:18.468: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:20.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:20.476: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:22.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:22.482: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 12:58:24.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 12:58:25.080: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:58:25.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lnx8p" for this suite.
Feb 15 12:58:49.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:58:49.657: INFO: namespace: e2e-tests-container-lifecycle-hook-lnx8p, resource: bindings, ignored listing per whitelist
Feb 15 12:58:49.707: INFO: namespace e2e-tests-container-lifecycle-hook-lnx8p deletion completed in 24.342258003s

• [SLOW TEST:86.149 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
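A sketch of the container shape the lifecycle-hook spec above exercises: a preStop hook that issues an HTTP GET against a separate handler pod. It uses the corev1.Handler type of this release line (renamed LifecycleHandler in newer APIs); the image, host, path, and port are placeholders.

package hooks

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// containerWithPreStopHTTPHook returns a container whose preStop hook calls
// back to the given handler IP when the pod is deleted.
func containerWithPreStopHTTPHook(handlerIP string) corev1.Container {
	return corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Host: handlerIP,
					Path: "/echo?msg=prestop",
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
}
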
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:58:49.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 15 12:58:50.391: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e936fcc6-4ff2-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0016b97fa), BlockOwnerDeletion:(*bool)(0xc0016b97fb)}}
Feb 15 12:58:50.421: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e91d2f36-4ff2-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001d5304a), BlockOwnerDeletion:(*bool)(0xc001d5304b)}}
Feb 15 12:58:50.569: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e920b1b9-4ff2-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001d535e2), BlockOwnerDeletion:(*bool)(0xc001d535e3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:58:55.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5dmhs" for this suite.
Feb 15 12:59:02.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:59:02.367: INFO: namespace: e2e-tests-gc-5dmhs, resource: bindings, ignored listing per whitelist
Feb 15 12:59:02.377: INFO: namespace e2e-tests-gc-5dmhs deletion completed in 6.423690662s

• [SLOW TEST:12.670 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
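A sketch of the ownership cycle the dependency-circle spec above sets up: pod1 owned by pod3, pod2 by pod1, pod3 by pod2, which the garbage collector must tolerate without deadlocking. The helper names are placeholders; the caller would push each modified pod back through the API.

package gccircle

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownerRef builds an OwnerReference of the shape shown in the log above.
func ownerRef(owner *corev1.Pod) metav1.OwnerReference {
	controller := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

// linkCircle wires three existing pods into a circular ownership chain.
func linkCircle(pod1, pod2, pod3 *corev1.Pod) {
	pod1.OwnerReferences = []metav1.OwnerReference{ownerRef(pod3)}
	pod2.OwnerReferences = []metav1.OwnerReference{ownerRef(pod1)}
	pod3.OwnerReferences = []metav1.OwnerReference{ownerRef(pod2)}
}
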
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:59:02.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f0cc4c5c-4ff2-11ea-960a-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 15 12:59:02.930: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-q9px5" to be "success or failure"
Feb 15 12:59:03.034: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 104.132423ms
Feb 15 12:59:05.099: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168603961s
Feb 15 12:59:07.119: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188680292s
Feb 15 12:59:10.004: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.073891778s
Feb 15 12:59:14.021: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.091190964s
Feb 15 12:59:16.045: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.114737082s
Feb 15 12:59:18.063: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.132519752s
STEP: Saw pod success
Feb 15 12:59:18.063: INFO: Pod "pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 12:59:18.069: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 12:59:18.189: INFO: Waiting for pod pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007 to disappear
Feb 15 12:59:18.228: INFO: Pod pod-projected-configmaps-f0d115de-4ff2-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:59:18.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q9px5" for this suite.
Feb 15 12:59:24.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:59:24.326: INFO: namespace: e2e-tests-projected-q9px5, resource: bindings, ignored listing per whitelist
Feb 15 12:59:24.434: INFO: namespace e2e-tests-projected-q9px5 deletion completed in 6.198806915s

• [SLOW TEST:22.057 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 12:59:24.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 15 12:59:45.028: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 12:59:45.040: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 12:59:47.041: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 12:59:47.738: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 12:59:49.041: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 12:59:49.105: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 12:59:51.042: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 12:59:51.156: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 12:59:53.041: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 12:59:53.060: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 12:59:55.041: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 12:59:55.061: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 12:59:55.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-bcg8m" for this suite.
Feb 15 13:00:19.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:00:19.204: INFO: namespace: e2e-tests-container-lifecycle-hook-bcg8m, resource: bindings, ignored listing per whitelist
Feb 15 13:00:19.286: INFO: namespace e2e-tests-container-lifecycle-hook-bcg8m deletion completed in 24.212256848s

• [SLOW TEST:54.851 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:00:19.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-1ea6c77b-4ff3-11ea-960a-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 15 13:00:19.900: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-mhtr4" to be "success or failure"
Feb 15 13:00:20.101: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 201.1007ms
Feb 15 13:00:22.753: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.852714515s
Feb 15 13:00:24.762: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.861706801s
Feb 15 13:00:26.785: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.885213935s
Feb 15 13:00:28.825: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.925047672s
Feb 15 13:00:30.846: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.946087091s
Feb 15 13:00:32.864: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.964273815s
STEP: Saw pod success
Feb 15 13:00:32.864: INFO: Pod "pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 13:00:32.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 13:00:33.414: INFO: Waiting for pod pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007 to disappear
Feb 15 13:00:33.635: INFO: Pod pod-projected-configmaps-1ea8e673-4ff3-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:00:33.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mhtr4" for this suite.
Feb 15 13:00:39.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:00:39.977: INFO: namespace: e2e-tests-projected-mhtr4, resource: bindings, ignored listing per whitelist
Feb 15 13:00:39.985: INFO: namespace e2e-tests-projected-mhtr4 deletion completed in 6.327924698s

• [SLOW TEST:20.699 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:00:39.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 15 13:00:40.451: INFO: Number of nodes with available pods: 0
Feb 15 13:00:40.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:41.465: INFO: Number of nodes with available pods: 0
Feb 15 13:00:41.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:42.610: INFO: Number of nodes with available pods: 0
Feb 15 13:00:42.610: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:43.831: INFO: Number of nodes with available pods: 0
Feb 15 13:00:43.831: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:44.521: INFO: Number of nodes with available pods: 0
Feb 15 13:00:44.521: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:45.469: INFO: Number of nodes with available pods: 0
Feb 15 13:00:45.469: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:48.278: INFO: Number of nodes with available pods: 0
Feb 15 13:00:48.278: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:48.687: INFO: Number of nodes with available pods: 0
Feb 15 13:00:48.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:49.502: INFO: Number of nodes with available pods: 0
Feb 15 13:00:49.503: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:50.492: INFO: Number of nodes with available pods: 0
Feb 15 13:00:50.492: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:51.477: INFO: Number of nodes with available pods: 0
Feb 15 13:00:51.477: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:52.547: INFO: Number of nodes with available pods: 1
Feb 15 13:00:52.547: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 15 13:00:52.714: INFO: Number of nodes with available pods: 0
Feb 15 13:00:52.714: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:53.769: INFO: Number of nodes with available pods: 0
Feb 15 13:00:53.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:55.034: INFO: Number of nodes with available pods: 0
Feb 15 13:00:55.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:55.993: INFO: Number of nodes with available pods: 0
Feb 15 13:00:55.993: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:57.612: INFO: Number of nodes with available pods: 0
Feb 15 13:00:57.612: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:57.901: INFO: Number of nodes with available pods: 0
Feb 15 13:00:57.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:58.731: INFO: Number of nodes with available pods: 0
Feb 15 13:00:58.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:00:59.740: INFO: Number of nodes with available pods: 0
Feb 15 13:00:59.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:00.921: INFO: Number of nodes with available pods: 0
Feb 15 13:01:00.921: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:01.733: INFO: Number of nodes with available pods: 0
Feb 15 13:01:01.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:02.769: INFO: Number of nodes with available pods: 0
Feb 15 13:01:02.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:03.731: INFO: Number of nodes with available pods: 0
Feb 15 13:01:03.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:05.156: INFO: Number of nodes with available pods: 0
Feb 15 13:01:05.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:05.907: INFO: Number of nodes with available pods: 0
Feb 15 13:01:05.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:06.871: INFO: Number of nodes with available pods: 0
Feb 15 13:01:06.871: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:07.744: INFO: Number of nodes with available pods: 0
Feb 15 13:01:07.744: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:08.725: INFO: Number of nodes with available pods: 0
Feb 15 13:01:08.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:09.740: INFO: Number of nodes with available pods: 0
Feb 15 13:01:09.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:11.832: INFO: Number of nodes with available pods: 0
Feb 15 13:01:11.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:12.772: INFO: Number of nodes with available pods: 0
Feb 15 13:01:12.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:13.745: INFO: Number of nodes with available pods: 0
Feb 15 13:01:13.746: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:14.757: INFO: Number of nodes with available pods: 0
Feb 15 13:01:14.758: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:15.744: INFO: Number of nodes with available pods: 0
Feb 15 13:01:15.744: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 15 13:01:16.753: INFO: Number of nodes with available pods: 1
Feb 15 13:01:16.753: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-clgdg, will wait for the garbage collector to delete the pods
Feb 15 13:01:16.837: INFO: Deleting DaemonSet.extensions daemon-set took: 20.369819ms
Feb 15 13:01:16.938: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.580823ms
Feb 15 13:01:32.777: INFO: Number of nodes with available pods: 0
Feb 15 13:01:32.778: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 13:01:32.788: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-clgdg/daemonsets","resourceVersion":"21761010"},"items":null}

Feb 15 13:01:32.804: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-clgdg/pods","resourceVersion":"21761010"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:01:32.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-clgdg" for this suite.
Feb 15 13:01:40.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:01:41.048: INFO: namespace: e2e-tests-daemonsets-clgdg, resource: bindings, ignored listing per whitelist
Feb 15 13:01:41.320: INFO: namespace e2e-tests-daemonsets-clgdg deletion completed in 8.386717023s

• [SLOW TEST:61.335 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
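Editor's note: the run-and-stop test above creates a DaemonSet, polls until every node reports one available daemon pod, then deletes it and waits for the garbage collector to remove the pods. A minimal Go sketch of a comparable DaemonSet object follows; the name, labels, and nginx image are illustrative assumptions, not the exact spec used in this run.

// Sketch of a minimal DaemonSet comparable to the one created above.
// Names, labels, and image are illustrative.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// The selector must match the pod template labels.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}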
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:01:41.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 15 13:01:41.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007" in namespace "e2e-tests-downward-api-fcm4n" to be "success or failure"
Feb 15 13:01:41.787: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 26.646863ms
Feb 15 13:01:43.912: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151487201s
Feb 15 13:01:45.933: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173090239s
Feb 15 13:01:48.492: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.731362584s
Feb 15 13:01:50.514: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.753202438s
Feb 15 13:01:52.553: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.792473516s
Feb 15 13:01:54.590: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.829738331s
STEP: Saw pod success
Feb 15 13:01:54.590: INFO: Pod "downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 13:01:54.611: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007 container client-container: 
STEP: delete the pod
Feb 15 13:01:54.835: INFO: Waiting for pod downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007 to disappear
Feb 15 13:01:54.853: INFO: Pod downwardapi-volume-4f70f857-4ff3-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:01:54.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fcm4n" for this suite.
Feb 15 13:02:03.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:02:03.130: INFO: namespace: e2e-tests-downward-api-fcm4n, resource: bindings, ignored listing per whitelist
Feb 15 13:02:03.207: INFO: namespace e2e-tests-downward-api-fcm4n deletion completed in 8.329024723s

• [SLOW TEST:21.887 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
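Editor's note: the pod created above exercises the downward API volume plugin with a resourceFieldRef pointing at the container's memory request. A Go sketch of that pod shape follows; the image, request size, and file path are illustrative assumptions, not the exact spec generated by the suite.

// Sketch of a pod that exposes its own memory request through a
// downward API volume. Values are illustrative.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIMemoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file content is the container's requests.memory value.
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
}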
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:02:03.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 15 13:02:03.658: INFO: Waiting up to 5m0s for pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007" in namespace "e2e-tests-emptydir-ltgl7" to be "success or failure"
Feb 15 13:02:03.683: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 25.66551ms
Feb 15 13:02:06.428: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.770142586s
Feb 15 13:02:08.441: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.783106178s
Feb 15 13:02:11.600: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.942303767s
Feb 15 13:02:13.636: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.97782252s
Feb 15 13:02:15.646: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.987960649s
Feb 15 13:02:17.659: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.000951547s
STEP: Saw pod success
Feb 15 13:02:17.659: INFO: Pod "pod-5c795ce2-4ff3-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 13:02:17.669: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5c795ce2-4ff3-11ea-960a-0242ac110007 container test-container: 
STEP: delete the pod
Feb 15 13:02:17.753: INFO: Waiting for pod pod-5c795ce2-4ff3-11ea-960a-0242ac110007 to disappear
Feb 15 13:02:18.720: INFO: Pod pod-5c795ce2-4ff3-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:02:18.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ltgl7" for this suite.
Feb 15 13:02:25.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:02:25.812: INFO: namespace: e2e-tests-emptydir-ltgl7, resource: bindings, ignored listing per whitelist
Feb 15 13:02:25.888: INFO: namespace e2e-tests-emptydir-ltgl7 deletion completed in 7.145970184s

• [SLOW TEST:22.680 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
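Editor's note: the emptydir test above mounts an emptyDir volume on the node's default medium and verifies a 0777 file written by root. A rough Go sketch of that pattern follows; the image and command are illustrative assumptions.

// Sketch of a pod mounting an emptyDir volume (default medium) and
// creating a 0777 file in it as root. Values are illustrative.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirModePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c",
					"touch /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource means the node's default storage medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}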
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:02:25.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 15 13:02:26.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-wrnb4" to be "success or failure"
Feb 15 13:02:26.466: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 122.032153ms
Feb 15 13:02:28.814: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470318004s
Feb 15 13:02:30.873: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.529766787s
Feb 15 13:02:33.719: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.375258132s
Feb 15 13:02:35.735: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.391636193s
Feb 15 13:02:37.769: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.425300526s
Feb 15 13:02:40.348: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.00448235s
STEP: Saw pod success
Feb 15 13:02:40.348: INFO: Pod "downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 13:02:40.405: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007 container client-container: 
STEP: delete the pod
Feb 15 13:02:40.712: INFO: Waiting for pod downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007 to disappear
Feb 15 13:02:40.858: INFO: Pod downwardapi-volume-6a0db5b4-4ff3-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:02:40.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wrnb4" for this suite.
Feb 15 13:02:46.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:02:47.075: INFO: namespace: e2e-tests-projected-wrnb4, resource: bindings, ignored listing per whitelist
Feb 15 13:02:47.080: INFO: namespace e2e-tests-projected-wrnb4 deletion completed in 6.203101953s

• [SLOW TEST:21.192 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
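Editor's note: the projected downwardAPI test above surfaces only the pod name through a projected volume. A Go sketch of such a volume follows; the file path is an illustrative assumption.

// Sketch of a projected volume exposing only the pod name via the
// downward API. Path is illustrative.
package main

import corev1 "k8s.io/api/core/v1"

func projectedPodnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
}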
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:02:47.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-7s9v
STEP: Creating a pod to test atomic-volume-subpath
Feb 15 13:02:47.376: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7s9v" in namespace "e2e-tests-subpath-rpj6n" to be "success or failure"
Feb 15 13:02:47.444: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 68.163315ms
Feb 15 13:02:49.461: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084945123s
Feb 15 13:02:51.481: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10466667s
Feb 15 13:02:53.590: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213519782s
Feb 15 13:02:55.615: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.239252633s
Feb 15 13:02:57.625: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.248736864s
Feb 15 13:02:59.796: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.419649775s
Feb 15 13:03:02.109: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.732797127s
Feb 15 13:03:04.231: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Pending", Reason="", readiness=false. Elapsed: 16.854550589s
Feb 15 13:03:06.259: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 18.88266675s
Feb 15 13:03:08.296: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 20.919956208s
Feb 15 13:03:10.331: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 22.954897884s
Feb 15 13:03:12.348: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 24.971453097s
Feb 15 13:03:14.366: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 26.989506915s
Feb 15 13:03:16.384: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 29.007926165s
Feb 15 13:03:18.427: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 31.051283876s
Feb 15 13:03:20.450: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 33.073565737s
Feb 15 13:03:22.519: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Running", Reason="", readiness=false. Elapsed: 35.143268493s
Feb 15 13:03:24.548: INFO: Pod "pod-subpath-test-projected-7s9v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.171904328s
STEP: Saw pod success
Feb 15 13:03:24.548: INFO: Pod "pod-subpath-test-projected-7s9v" satisfied condition "success or failure"
Feb 15 13:03:24.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-7s9v container test-container-subpath-projected-7s9v: 
STEP: delete the pod
Feb 15 13:03:24.948: INFO: Waiting for pod pod-subpath-test-projected-7s9v to disappear
Feb 15 13:03:24.974: INFO: Pod pod-subpath-test-projected-7s9v no longer exists
STEP: Deleting pod pod-subpath-test-projected-7s9v
Feb 15 13:03:24.974: INFO: Deleting pod "pod-subpath-test-projected-7s9v" in namespace "e2e-tests-subpath-rpj6n"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:03:25.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rpj6n" for this suite.
Feb 15 13:03:33.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:03:33.231: INFO: namespace: e2e-tests-subpath-rpj6n, resource: bindings, ignored listing per whitelist
Feb 15 13:03:33.235: INFO: namespace e2e-tests-subpath-rpj6n deletion completed in 8.182840094s

• [SLOW TEST:46.155 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
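Editor's note: the subpath test above mounts an atomic-writer (projected) volume in the same container both whole and through a subPath, then checks the content seen through each mount. A small Go sketch of that volumeMount pair follows; the mount paths and subPath string are illustrative assumptions.

// Sketch of mounting one volume twice: once via subPath, once whole.
// Paths are illustrative.
package main

import corev1 "k8s.io/api/core/v1"

func subPathMounts(volumeName string) []corev1.VolumeMount {
	return []corev1.VolumeMount{
		{Name: volumeName, MountPath: "/processed", SubPath: "path/to/file"},
		{Name: volumeName, MountPath: "/whole-volume"},
	}
}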
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:03:33.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 15 13:03:45.618: INFO: Pod pod-hostip-922338dc-4ff3-11ea-960a-0242ac110007 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:03:45.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6ljzg" for this suite.
Feb 15 13:04:09.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:04:09.870: INFO: namespace: e2e-tests-pods-6ljzg, resource: bindings, ignored listing per whitelist
Feb 15 13:04:09.881: INFO: namespace e2e-tests-pods-6ljzg deletion completed in 24.255696059s

• [SLOW TEST:36.645 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
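Editor's note: the host IP assertion above simply reads status.hostIP from the pod object. A Go sketch of doing the same with client-go follows, using the pre-1.18 call style that matches this v1.13 run (newer client-go releases add a context.Context argument to Get); the kubeconfig path and names are parameters, not values from this log.

// Sketch: fetch a pod and print its status.hostIP with client-go.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func printHostIP(kubeconfig, namespace, podName string) error {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	// Pre-1.18 client-go signature; newer versions take a context first.
	pod, err := client.CoreV1().Pods(namespace).Get(podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
	return nil
}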
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:04:09.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a7ed06e5-4ff3-11ea-960a-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 15 13:04:10.242: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007" in namespace "e2e-tests-projected-8nj4x" to be "success or failure"
Feb 15 13:04:10.268: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 25.050464ms
Feb 15 13:04:13.183: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.940435335s
Feb 15 13:04:15.209: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.966788098s
Feb 15 13:04:17.238: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.995851682s
Feb 15 13:04:19.638: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.395779078s
Feb 15 13:04:21.679: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.436788859s
Feb 15 13:04:23.837: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.594459155s
Feb 15 13:04:25.913: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.67043936s
STEP: Saw pod success
Feb 15 13:04:25.913: INFO: Pod "pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007" satisfied condition "success or failure"
Feb 15 13:04:25.923: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 15 13:04:26.243: INFO: Waiting for pod pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007 to disappear
Feb 15 13:04:26.252: INFO: Pod pod-projected-secrets-a7fd1dff-4ff3-11ea-960a-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:04:26.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8nj4x" for this suite.
Feb 15 13:04:32.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:04:32.606: INFO: namespace: e2e-tests-projected-8nj4x, resource: bindings, ignored listing per whitelist
Feb 15 13:04:32.694: INFO: namespace e2e-tests-projected-8nj4x deletion completed in 6.329111402s

• [SLOW TEST:22.812 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
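Editor's note: the projected-secret test above consumes a secret through a projected volume with an item mapping and an explicit item mode. A Go sketch of that volume shape follows; the secret key, target path, and 0400 mode are illustrative assumptions.

// Sketch of a projected secret volume with a key-to-path mapping and
// an explicit per-item file mode. Values are illustrative.
package main

import corev1 "k8s.io/api/core/v1"

func projectedSecretVolume(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}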
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:04:32.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 15 13:04:43.276: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:05:12.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-fdvxv" for this suite.
Feb 15 13:05:18.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:05:18.299: INFO: namespace: e2e-tests-namespaces-fdvxv, resource: bindings, ignored listing per whitelist
Feb 15 13:05:18.500: INFO: namespace e2e-tests-namespaces-fdvxv deletion completed in 6.311525144s
STEP: Destroying namespace "e2e-tests-nsdeletetest-84xmt" for this suite.
Feb 15 13:05:18.507: INFO: Namespace e2e-tests-nsdeletetest-84xmt was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-xfrk2" for this suite.
Feb 15 13:05:24.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:05:24.588: INFO: namespace: e2e-tests-nsdeletetest-xfrk2, resource: bindings, ignored listing per whitelist
Feb 15 13:05:24.641: INFO: namespace e2e-tests-nsdeletetest-xfrk2 deletion completed in 6.133807097s

• [SLOW TEST:51.946 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
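Editor's note: the namespaces test above relies on namespace deletion cascading to every pod inside it. A Go sketch of deleting a namespace and polling until it is gone follows, again in the pre-1.18 client-go call style; the namespace name is a parameter, not one from this run, and a real caller would bound the wait.

// Sketch: delete a namespace and wait for it (and its pods) to disappear.
package main

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteNamespaceAndWait(client kubernetes.Interface, name string) error {
	if err := client.CoreV1().Namespaces().Delete(name, &metav1.DeleteOptions{}); err != nil {
		return err
	}
	for {
		_, err := client.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // namespace and everything in it are gone
		}
		if err != nil {
			return err
		}
		time.Sleep(2 * time.Second) // poll; bound this in real code
	}
}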
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:05:24.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 13:05:24.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:05:27.317: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 13:05:27.317: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 15 13:05:27.358: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 15 13:05:27.489: INFO: scanned /root for discovery docs: 
Feb 15 13:05:27.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:05:55.245: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 15 13:05:55.245: INFO: stdout: "Created e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb\nScaling up e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 15 13:05:55.245: INFO: stdout: "Created e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb\nScaling up e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 15 13:05:55.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:05:55.472: INFO: stderr: ""
Feb 15 13:05:55.473: INFO: stdout: "e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb-h6sdd e2e-test-nginx-rc-t2gsl "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 15 13:06:00.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:06:00.637: INFO: stderr: ""
Feb 15 13:06:00.637: INFO: stdout: "e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb-h6sdd e2e-test-nginx-rc-t2gsl "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 15 13:06:05.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:06:05.792: INFO: stderr: ""
Feb 15 13:06:05.792: INFO: stdout: "e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb-h6sdd "
Feb 15 13:06:05.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb-h6sdd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:06:06.022: INFO: stderr: ""
Feb 15 13:06:06.022: INFO: stdout: "true"
Feb 15 13:06:06.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb-h6sdd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:06:06.207: INFO: stderr: ""
Feb 15 13:06:06.207: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 15 13:06:06.207: INFO: e2e-test-nginx-rc-56d0d4f99e3e824f798cf93139c39fbb-h6sdd is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb 15 13:06:06.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vg64j'
Feb 15 13:06:06.380: INFO: stderr: ""
Feb 15 13:06:06.381: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:06:06.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vg64j" for this suite.
Feb 15 13:06:32.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:06:32.865: INFO: namespace: e2e-tests-kubectl-vg64j, resource: bindings, ignored listing per whitelist
Feb 15 13:06:32.865: INFO: namespace e2e-tests-kubectl-vg64j deletion completed in 26.437744166s

• [SLOW TEST:68.224 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:06:32.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-hzlvk
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 15 13:06:33.045: INFO: Found 0 stateful pods, waiting for 3
Feb 15 13:06:43.129: INFO: Found 1 stateful pods, waiting for 3
Feb 15 13:06:53.076: INFO: Found 2 stateful pods, waiting for 3
Feb 15 13:07:03.062: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:07:03.062: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:07:03.062: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 13:07:13.062: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:07:13.063: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:07:13.063: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 15 13:07:13.114: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 15 13:07:23.281: INFO: Updating stateful set ss2
Feb 15 13:07:23.427: INFO: Waiting for Pod e2e-tests-statefulset-hzlvk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 15 13:07:37.175: INFO: Found 2 stateful pods, waiting for 3
Feb 15 13:07:47.216: INFO: Found 2 stateful pods, waiting for 3
Feb 15 13:07:57.205: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:07:57.205: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:07:57.205: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 13:08:07.199: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:08:07.199: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 13:08:07.199: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 15 13:08:07.246: INFO: Updating stateful set ss2
Feb 15 13:08:07.266: INFO: Waiting for Pod e2e-tests-statefulset-hzlvk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 15 13:08:17.743: INFO: Updating stateful set ss2
Feb 15 13:08:18.578: INFO: Waiting for StatefulSet e2e-tests-statefulset-hzlvk/ss2 to complete update
Feb 15 13:08:18.578: INFO: Waiting for Pod e2e-tests-statefulset-hzlvk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 15 13:08:28.779: INFO: Waiting for StatefulSet e2e-tests-statefulset-hzlvk/ss2 to complete update
Feb 15 13:08:28.779: INFO: Waiting for Pod e2e-tests-statefulset-hzlvk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 15 13:08:38.674: INFO: Waiting for StatefulSet e2e-tests-statefulset-hzlvk/ss2 to complete update
Feb 15 13:08:38.674: INFO: Waiting for Pod e2e-tests-statefulset-hzlvk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 15 13:08:48.672: INFO: Waiting for StatefulSet e2e-tests-statefulset-hzlvk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 15 13:08:58.636: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hzlvk
Feb 15 13:08:58.651: INFO: Scaling statefulset ss2 to 0
Feb 15 13:09:38.731: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 13:09:38.758: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:09:38.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-hzlvk" for this suite.
Feb 15 13:09:47.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:09:47.457: INFO: namespace: e2e-tests-statefulset-hzlvk, resource: bindings, ignored listing per whitelist
Feb 15 13:09:47.676: INFO: namespace e2e-tests-statefulset-hzlvk deletion completed in 8.439969715s

• [SLOW TEST:194.811 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
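Editor's note: the canary and phased behaviour above is driven by the StatefulSet RollingUpdate strategy's partition: only pods with an ordinal greater than or equal to the partition move to the new revision, so the test first sets a high partition (nothing updates), then lowers it step by step. A Go sketch of that strategy field follows; the partition value is whatever the caller passes.

// Sketch of a partitioned RollingUpdate strategy for a StatefulSet.
package main

import appsv1 "k8s.io/api/apps/v1"

func partitionedUpdateStrategy(partition int32) appsv1.StatefulSetUpdateStrategy {
	return appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			// Pods with ordinal >= Partition are updated; the rest keep the old revision.
			Partition: &partition,
		},
	}
}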
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:09:47.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 15 13:09:47.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 15 13:09:47.994: INFO: stderr: ""
Feb 15 13:09:47.994: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 15 13:09:48.000: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:09:48.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9xsk8" for this suite.
Feb 15 13:09:56.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:09:56.220: INFO: namespace: e2e-tests-kubectl-9xsk8, resource: bindings, ignored listing per whitelist
Feb 15 13:09:56.223: INFO: namespace e2e-tests-kubectl-9xsk8 deletion completed in 8.212748803s

S [SKIPPING] [8.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 15 13:09:48.000: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:09:56.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 13:09:56.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-w2v8m'
Feb 15 13:09:56.781: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 13:09:56.782: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 15 13:09:56.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-w2v8m'
Feb 15 13:09:57.180: INFO: stderr: ""
Feb 15 13:09:57.180: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:09:57.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w2v8m" for this suite.
Feb 15 13:10:21.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:10:21.898: INFO: namespace: e2e-tests-kubectl-w2v8m, resource: bindings, ignored listing per whitelist
Feb 15 13:10:21.934: INFO: namespace e2e-tests-kubectl-w2v8m deletion completed in 24.574336663s

• [SLOW TEST:25.711 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:10:21.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 13:10:22.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-q2mgs'
Feb 15 13:10:22.421: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 13:10:22.421: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 15 13:10:22.651: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4gc7b]
Feb 15 13:10:22.651: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4gc7b" in namespace "e2e-tests-kubectl-q2mgs" to be "running and ready"
Feb 15 13:10:22.663: INFO: Pod "e2e-test-nginx-rc-4gc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.859777ms
Feb 15 13:10:24.686: INFO: Pod "e2e-test-nginx-rc-4gc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034170684s
Feb 15 13:10:26.717: INFO: Pod "e2e-test-nginx-rc-4gc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065350317s
Feb 15 13:10:29.083: INFO: Pod "e2e-test-nginx-rc-4gc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432148517s
Feb 15 13:10:31.098: INFO: Pod "e2e-test-nginx-rc-4gc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446257147s
Feb 15 13:10:33.110: INFO: Pod "e2e-test-nginx-rc-4gc7b": Phase="Running", Reason="", readiness=true. Elapsed: 10.459046547s
Feb 15 13:10:33.111: INFO: Pod "e2e-test-nginx-rc-4gc7b" satisfied condition "running and ready"
Feb 15 13:10:33.111: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-4gc7b]
Feb 15 13:10:33.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-q2mgs'
Feb 15 13:10:33.478: INFO: stderr: ""
Feb 15 13:10:33.478: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb 15 13:10:33.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-q2mgs'
Feb 15 13:10:33.649: INFO: stderr: ""
Feb 15 13:10:33.649: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:10:33.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q2mgs" for this suite.
Feb 15 13:10:41.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:10:41.941: INFO: namespace: e2e-tests-kubectl-q2mgs, resource: bindings, ignored listing per whitelist
Feb 15 13:10:41.951: INFO: namespace e2e-tests-kubectl-q2mgs deletion completed in 8.288675327s

• [SLOW TEST:20.016 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:10:41.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 15 13:10:42.153: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:10:42.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rd24b" for this suite.
Feb 15 13:10:48.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:10:48.490: INFO: namespace: e2e-tests-kubectl-rd24b, resource: bindings, ignored listing per whitelist
Feb 15 13:10:48.723: INFO: namespace e2e-tests-kubectl-rd24b deletion completed in 6.372850291s

• [SLOW TEST:6.772 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 15 13:10:48.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 15 13:10:48.958: INFO: Creating deployment "test-recreate-deployment"
Feb 15 13:10:48.983: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 15 13:10:49.002: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 15 13:10:51.715: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 15 13:10:51.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 13:10:53.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 13:10:55.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 13:10:57.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 13:10:59.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717369049, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
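The repeated status lines above are the test polling the Deployment until it reports completion. A minimal sketch of that kind of wait loop with client-go follows; the package name, helper name, and 2-second interval are illustrative assumptions, and the e2e framework's own helpers differ in detail.

// Sketch: poll until a Deployment reports all replicas updated and available.
// Assumes a modern client-go clientset; helper name and interval are illustrative.
package deployutil

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForDeploymentComplete(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Complete once the controller has observed the latest generation and
		// every desired replica is updated and available.
		return d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas &&
			d.Status.UnavailableReplicas == 0, nil
	})
}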
Feb 15 13:11:01.736: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 15 13:11:01.763: INFO: Updating deployment test-recreate-deployment
Feb 15 13:11:01.764: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 15 13:11:02.998: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-vvz6k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vvz6k/deployments/test-recreate-deployment,UID:95a759e2-4ff4-11ea-a994-fa163e34d433,ResourceVersion:21762338,Generation:2,CreationTimestamp:2020-02-15 13:10:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-15 13:11:02 +0000 UTC 2020-02-15 13:11:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-15 13:11:02 +0000 UTC 2020-02-15 13:10:49 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
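Stripped of defaults, the Deployment dumped above is a one-replica nginx Deployment using the Recreate strategy, under which the old pods are torn down before any new pods are created. A rough reconstruction of that spec in Go, built from the logged fields rather than from the test's actual fixture code:

// Sketch: the Deployment spec as reconstructed from the object dump above.
// Name, labels, and image are taken from the log; everything else is left at defaults.
package deployutil

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func recreateDeployment() *appsv1.Deployment {
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to zero before the new one comes up.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}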

Feb 15 13:11:03.010: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-vvz6k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vvz6k/replicasets/test-recreate-deployment-589c4bfd,UID:9db4fe11-4ff4-11ea-a994-fa163e34d433,ResourceVersion:21762337,Generation:1,CreationTimestamp:2020-02-15 13:11:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 95a759e2-4ff4-11ea-a994-fa163e34d433 0xc0011508df 0xc0011508f0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 15 13:11:03.010: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 15 13:11:03.010: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-vvz6k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vvz6k/replicasets/test-recreate-deployment-5bf7f65dc,UID:95afaa5e-4ff4-11ea-a994-fa163e34d433,ResourceVersion:21762327,Generation:2,CreationTimestamp:2020-02-15 13:10:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 95a759e2-4ff4-11ea-a994-fa163e34d433 0xc001150a50 0xc001150a51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 15 13:11:03.021: INFO: Pod "test-recreate-deployment-589c4bfd-fqg9l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-fqg9l,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-vvz6k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vvz6k/pods/test-recreate-deployment-589c4bfd-fqg9l,UID:9db9212d-4ff4-11ea-a994-fa163e34d433,ResourceVersion:21762333,Generation:0,CreationTimestamp:2020-02-15 13:11:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 9db4fe11-4ff4-11ea-a994-fa163e34d433 0xc001151aaf 0xc001151ad0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xdj4l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xdj4l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xdj4l true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001151b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001151ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:11:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 15 13:11:03.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vvz6k" for this suite.
Feb 15 13:11:13.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:11:13.171: INFO: namespace: e2e-tests-deployment-vvz6k, resource: bindings, ignored listing per whitelist
Feb 15 13:11:13.236: INFO: namespace e2e-tests-deployment-vvz6k deletion completed in 10.207437263s

• [SLOW TEST:24.513 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb 15 13:11:13.237: INFO: Running AfterSuite actions on all nodes
Feb 15 13:11:13.237: INFO: Running AfterSuite actions on node 1
Feb 15 13:11:13.237: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8636.800 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS