I1224 10:47:13.987548 8 e2e.go:224] Starting e2e run "be5f1ec5-263a-11ea-b7c4-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577184433 - Will randomize all specs
Will run 201 of 2164 specs

Dec 24 10:47:14.396: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 10:47:14.408: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 24 10:47:14.440: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 24 10:47:14.506: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 24 10:47:14.506: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 24 10:47:14.506: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 24 10:47:14.576: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 24 10:47:14.576: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 24 10:47:14.576: INFO: e2e test version: v1.13.12
Dec 24 10:47:14.581: INFO: kube-apiserver version: v1.13.8
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:47:14.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Dec 24 10:47:14.757: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-m8tll
Dec 24 10:47:24.850: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-m8tll
STEP: checking the pod's current state and verifying that restartCount is present
Dec 24 10:47:24.871: INFO: Initial restart count of pod liveness-http is 0
Dec 24 10:47:41.066: INFO: Restart count of pod e2e-tests-container-probe-m8tll/liveness-http is now 1 (16.19454534s elapsed)
Dec 24 10:48:01.270: INFO: Restart count of pod e2e-tests-container-probe-m8tll/liveness-http is now 2 (36.398507255s elapsed)
Dec 24 10:48:20.117: INFO: Restart count of pod e2e-tests-container-probe-m8tll/liveness-http is now 3 (55.246057165s elapsed)
Dec 24 10:48:40.599: INFO: Restart count of pod e2e-tests-container-probe-m8tll/liveness-http is now 4 (1m15.727138856s elapsed)
Dec 24 10:49:40.021: INFO: Restart count of pod e2e-tests-container-probe-m8tll/liveness-http is now 5 (2m15.149135688s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:49:40.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m8tll" for this suite.
Dec 24 10:49:46.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:49:46.288: INFO: namespace: e2e-tests-container-probe-m8tll, resource: bindings, ignored listing per whitelist
Dec 24 10:49:46.383: INFO: namespace e2e-tests-container-probe-m8tll deletion completed in 6.306414362s

• [SLOW TEST:151.802 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:49:46.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-q25pz
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 10:49:46.635: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 10:50:21.009: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-q25pz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 10:50:21.009: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 10:50:21.465: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:50:21.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-q25pz" for this suite.
Dec 24 10:50:45.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:50:45.694: INFO: namespace: e2e-tests-pod-network-test-q25pz, resource: bindings, ignored listing per whitelist
Dec 24 10:50:45.739: INFO: namespace e2e-tests-pod-network-test-q25pz deletion completed in 24.255451234s

• [SLOW TEST:59.356 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:50:45.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 24 10:50:46.019: INFO: Waiting up to 5m0s for pod "pod-3d341679-263b-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-5wflf" to be "success or failure"
Dec 24 10:50:46.028: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.819828ms
Dec 24 10:50:48.037: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017739479s
Dec 24 10:50:50.047: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027363074s
Dec 24 10:50:52.533: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5136509s
Dec 24 10:50:54.594: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.574738738s
Dec 24 10:50:56.638: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.617834964s
Dec 24 10:50:58.679: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.659574305s
STEP: Saw pod success
Dec 24 10:50:58.680: INFO: Pod "pod-3d341679-263b-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 10:50:58.697: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3d341679-263b-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 10:50:59.010: INFO: Waiting for pod pod-3d341679-263b-11ea-b7c4-0242ac110005 to disappear
Dec 24 10:50:59.017: INFO: Pod pod-3d341679-263b-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:50:59.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5wflf" for this suite.
Dec 24 10:51:05.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:51:05.175: INFO: namespace: e2e-tests-emptydir-5wflf, resource: bindings, ignored listing per whitelist
Dec 24 10:51:05.232: INFO: namespace e2e-tests-emptydir-5wflf deletion completed in 6.209131587s

• [SLOW TEST:19.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:51:05.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 10:51:05.477: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 24 10:51:05.603: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 24 10:51:10.993: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 24 10:51:15.040: INFO: Creating deployment "test-rolling-update-deployment"
Dec 24 10:51:15.076: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 24 10:51:15.171: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 24 10:51:17.208: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 24 10:51:17.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 10:51:19.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 10:51:21.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 10:51:23.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712781475, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 10:51:25.228: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 24 10:51:25.250: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-9dpsg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9dpsg/deployments/test-rolling-update-deployment,UID:4e87b286-263b-11ea-a994-fa163e34d433,ResourceVersion:15889617,Generation:1,CreationTimestamp:2019-12-24 10:51:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-24 10:51:15 +0000 UTC 2019-12-24 10:51:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-24 10:51:24 +0000 UTC 2019-12-24 10:51:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Dec 24 10:51:25.256: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-9dpsg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9dpsg/replicasets/test-rolling-update-deployment-75db98fb4c,UID:4e9b7aee-263b-11ea-a994-fa163e34d433,ResourceVersion:15889607,Generation:1,CreationTimestamp:2019-12-24 10:51:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e87b286-263b-11ea-a994-fa163e34d433 0xc001b469a7 0xc001b469a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 24 10:51:25.256: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 24 10:51:25.257: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-9dpsg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9dpsg/replicasets/test-rolling-update-controller,UID:48d206a8-263b-11ea-a994-fa163e34d433,ResourceVersion:15889616,Generation:2,CreationTimestamp:2019-12-24 10:51:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e87b286-263b-11ea-a994-fa163e34d433 0xc001b46897 0xc001b46898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 10:51:25.264: INFO: Pod "test-rolling-update-deployment-75db98fb4c-8sf2z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-8sf2z,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-9dpsg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9dpsg/pods/test-rolling-update-deployment-75db98fb4c-8sf2z,UID:4e9c5456-263b-11ea-a994-fa163e34d433,ResourceVersion:15889606,Generation:0,CreationTimestamp:2019-12-24 10:51:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 4e9b7aee-263b-11ea-a994-fa163e34d433 0xc001b47aa7 0xc001b47aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-c5jdq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c5jdq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-c5jdq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b47b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b47b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 10:51:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 10:51:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 10:51:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 10:51:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-24 10:51:15 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-24 10:51:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d5be2b78c425c6053cb97d42239f66baf66d9ce5c7e0ada3471cc81db82a103f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:51:25.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9dpsg" for this suite.
Dec 24 10:51:33.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:51:33.927: INFO: namespace: e2e-tests-deployment-9dpsg, resource: bindings, ignored listing per whitelist
Dec 24 10:51:33.936: INFO: namespace e2e-tests-deployment-9dpsg deletion completed in 8.665550728s

• [SLOW TEST:28.704 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:51:33.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-5a063beb-263b-11ea-b7c4-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-5a063e01-263b-11ea-b7c4-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5a063beb-263b-11ea-b7c4-0242ac110005
STEP: Updating configmap cm-test-opt-upd-5a063e01-263b-11ea-b7c4-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-5a063e39-263b-11ea-b7c4-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:53:18.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vlq5k" for this suite.
Dec 24 10:53:42.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:53:42.215: INFO: namespace: e2e-tests-projected-vlq5k, resource: bindings, ignored listing per whitelist
Dec 24 10:53:42.222: INFO: namespace e2e-tests-projected-vlq5k deletion completed in 24.194755806s

• [SLOW TEST:128.285 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:53:42.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-vps7m
Dec 24 10:53:52.487: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-vps7m
STEP: checking the pod's current state and verifying that restartCount is present
Dec 24 10:53:52.499: INFO: Initial restart count of pod liveness-exec is 0
Dec 24 10:54:47.289: INFO: Restart count of pod e2e-tests-container-probe-vps7m/liveness-exec is now 1 (54.790510793s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:54:47.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vps7m" for this suite.
Dec 24 10:54:55.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:54:55.595: INFO: namespace: e2e-tests-container-probe-vps7m, resource: bindings, ignored listing per whitelist
Dec 24 10:54:55.633: INFO: namespace e2e-tests-container-probe-vps7m deletion completed in 8.217418108s

• [SLOW TEST:73.411 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:54:55.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 24 10:54:56.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:54:58.248: INFO: stderr: ""
Dec 24 10:54:58.248: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 10:54:58.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:54:58.554: INFO: stderr: ""
Dec 24 10:54:58.554: INFO: stdout: "update-demo-nautilus-2gdfw update-demo-nautilus-w6wv9 "
Dec 24 10:54:58.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gdfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:54:58.741: INFO: stderr: ""
Dec 24 10:54:58.742: INFO: stdout: ""
Dec 24 10:54:58.742: INFO: update-demo-nautilus-2gdfw is created but not running
Dec 24 10:55:03.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:03.933: INFO: stderr: ""
Dec 24 10:55:03.933: INFO: stdout: "update-demo-nautilus-2gdfw update-demo-nautilus-w6wv9 "
Dec 24 10:55:03.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gdfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:04.152: INFO: stderr: ""
Dec 24 10:55:04.152: INFO: stdout: ""
Dec 24 10:55:04.152: INFO: update-demo-nautilus-2gdfw is created but not running
Dec 24 10:55:09.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:09.314: INFO: stderr: ""
Dec 24 10:55:09.314: INFO: stdout: "update-demo-nautilus-2gdfw update-demo-nautilus-w6wv9 "
Dec 24 10:55:09.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gdfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:09.426: INFO: stderr: ""
Dec 24 10:55:09.426: INFO: stdout: ""
Dec 24 10:55:09.427: INFO: update-demo-nautilus-2gdfw is created but not running
Dec 24 10:55:14.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:14.616: INFO: stderr: ""
Dec 24 10:55:14.616: INFO: stdout: "update-demo-nautilus-2gdfw update-demo-nautilus-w6wv9 "
Dec 24 10:55:14.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gdfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:14.751: INFO: stderr: ""
Dec 24 10:55:14.751: INFO: stdout: "true"
Dec 24 10:55:14.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gdfw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:14.887: INFO: stderr: ""
Dec 24 10:55:14.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 10:55:14.888: INFO: validating pod update-demo-nautilus-2gdfw
Dec 24 10:55:14.902: INFO: got data: { "image": "nautilus.jpg" }
Dec 24 10:55:14.902: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 10:55:14.902: INFO: update-demo-nautilus-2gdfw is verified up and running
Dec 24 10:55:14.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w6wv9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:15.038: INFO: stderr: ""
Dec 24 10:55:15.038: INFO: stdout: "true"
Dec 24 10:55:15.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w6wv9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:15.139: INFO: stderr: ""
Dec 24 10:55:15.139: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 10:55:15.139: INFO: validating pod update-demo-nautilus-w6wv9
Dec 24 10:55:15.149: INFO: got data: { "image": "nautilus.jpg" }
Dec 24 10:55:15.149: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 10:55:15.149: INFO: update-demo-nautilus-w6wv9 is verified up and running
STEP: rolling-update to new replication controller
Dec 24 10:55:15.151: INFO: scanned /root for discovery docs:
Dec 24 10:55:15.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:50.273: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 24 10:55:50.273: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 10:55:50.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:50.490: INFO: stderr: ""
Dec 24 10:55:50.490: INFO: stdout: "update-demo-kitten-sqsfn update-demo-kitten-txwdk update-demo-nautilus-2gdfw "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 24 10:55:55.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:55.708: INFO: stderr: ""
Dec 24 10:55:55.708: INFO: stdout: "update-demo-kitten-sqsfn update-demo-kitten-txwdk "
Dec 24 10:55:55.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sqsfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:55.839: INFO: stderr: ""
Dec 24 10:55:55.840: INFO: stdout: "true"
Dec 24 10:55:55.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sqsfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:55.965: INFO: stderr: ""
Dec 24 10:55:55.965: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 24 10:55:55.965: INFO: validating pod update-demo-kitten-sqsfn
Dec 24 10:55:55.978: INFO: got data: { "image": "kitten.jpg" }
Dec 24 10:55:55.978: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 24 10:55:55.978: INFO: update-demo-kitten-sqsfn is verified up and running
Dec 24 10:55:55.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-txwdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:56.108: INFO: stderr: ""
Dec 24 10:55:56.108: INFO: stdout: "true"
Dec 24 10:55:56.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-txwdk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5zmvf'
Dec 24 10:55:56.232: INFO: stderr: ""
Dec 24 10:55:56.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 24 10:55:56.232: INFO: validating pod update-demo-kitten-txwdk
Dec 24 10:55:56.256: INFO: got data: { "image": "kitten.jpg" }
Dec 24 10:55:56.256: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 24 10:55:56.256: INFO: update-demo-kitten-txwdk is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:55:56.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5zmvf" for this suite.
Dec 24 10:56:20.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:56:20.399: INFO: namespace: e2e-tests-kubectl-5zmvf, resource: bindings, ignored listing per whitelist
Dec 24 10:56:20.470: INFO: namespace e2e-tests-kubectl-5zmvf deletion completed in 24.209128046s

• [SLOW TEST:84.836 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:56:20.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 24 10:56:20.763: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-ncn5n,SelfLink:/api/v1/namespaces/e2e-tests-watch-ncn5n/configmaps/e2e-watch-test-watch-closed,UID:04b9b6e2-263c-11ea-a994-fa163e34d433,ResourceVersion:15890195,Generation:0,CreationTimestamp:2019-12-24 10:56:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 10:56:20.763: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-ncn5n,SelfLink:/api/v1/namespaces/e2e-tests-watch-ncn5n/configmaps/e2e-watch-test-watch-closed,UID:04b9b6e2-263c-11ea-a994-fa163e34d433,ResourceVersion:15890196,Generation:0,CreationTimestamp:2019-12-24 10:56:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 24 10:56:20.803: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-ncn5n,SelfLink:/api/v1/namespaces/e2e-tests-watch-ncn5n/configmaps/e2e-watch-test-watch-closed,UID:04b9b6e2-263c-11ea-a994-fa163e34d433,ResourceVersion:15890197,Generation:0,CreationTimestamp:2019-12-24 10:56:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 10:56:20.803: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-ncn5n,SelfLink:/api/v1/namespaces/e2e-tests-watch-ncn5n/configmaps/e2e-watch-test-watch-closed,UID:04b9b6e2-263c-11ea-a994-fa163e34d433,ResourceVersion:15890198,Generation:0,CreationTimestamp:2019-12-24 10:56:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:56:20.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-ncn5n" for this suite.
Dec 24 10:56:26.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:56:26.889: INFO: namespace: e2e-tests-watch-ncn5n, resource: bindings, ignored listing per whitelist
Dec 24 10:56:26.992: INFO: namespace e2e-tests-watch-ncn5n deletion completed in 6.181598874s

• [SLOW TEST:6.522 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:56:26.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 24 10:56:38.525: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:56:39.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-tskzr" for this suite.
Dec 24 10:57:06.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:57:06.671: INFO: namespace: e2e-tests-replicaset-tskzr, resource: bindings, ignored listing per whitelist
Dec 24 10:57:06.822: INFO: namespace e2e-tests-replicaset-tskzr deletion completed in 26.845819822s

• [SLOW TEST:39.829 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:57:06.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 10:57:07.150: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 33.121588ms)
Dec 24 10:57:07.186: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 35.569303ms)
Dec 24 10:57:07.269: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 82.330333ms)
Dec 24 10:57:07.280: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.878064ms)
Dec 24 10:57:07.287: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.997078ms)
Dec 24 10:57:07.291: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.515269ms)
Dec 24 10:57:07.296: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.629973ms)
Dec 24 10:57:07.299: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.639523ms)
Dec 24 10:57:07.303: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.359991ms)
Dec 24 10:57:07.306: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.404332ms)
Dec 24 10:57:07.310: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.439297ms)
Dec 24 10:57:07.313: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.471928ms)
Dec 24 10:57:07.316: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.230463ms)
Dec 24 10:57:07.320: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.224024ms)
Dec 24 10:57:07.323: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.519804ms)
Dec 24 10:57:07.327: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.503638ms)
Dec 24 10:57:07.331: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.588658ms)
Dec 24 10:57:07.336: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.702484ms)
Dec 24 10:57:07.341: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.866701ms)
Dec 24 10:57:07.351: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.061041ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:57:07.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-zlh8k" for this suite.
Dec 24 10:57:13.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:57:13.481: INFO: namespace: e2e-tests-proxy-zlh8k, resource: bindings, ignored listing per whitelist
Dec 24 10:57:13.572: INFO: namespace e2e-tests-proxy-zlh8k deletion completed in 6.217675181s

• [SLOW TEST:6.750 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
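For reference, the node-log proxy subresource exercised above can be hit directly with kubectl; a minimal sketch against the node named in the run (the trailing slash lists the log files, such as alternatives.log, under /var/log on that node):

kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"
# or go through a local API proxy and plain curl:
kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"
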
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:57:13.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:57:25.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2vrg4" for this suite.
Dec 24 10:57:31.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:57:32.010: INFO: namespace: e2e-tests-kubelet-test-2vrg4, resource: bindings, ignored listing per whitelist
Dec 24 10:57:32.104: INFO: namespace e2e-tests-kubelet-test-2vrg4 deletion completed in 6.165823757s

• [SLOW TEST:18.531 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
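A minimal sketch of the behaviour this test asserts, with illustrative names rather than the test's generated ones: a container whose command always fails ends up with a terminated state whose reason can be read from the pod status.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/false"]     # exits non-zero immediately
EOF
# once the container has exited, its terminated reason (typically "Error") is visible here:
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
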
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:57:32.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-2f8c96e8-263c-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 10:57:32.676: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-hw9rc" to be "success or failure"
Dec 24 10:57:32.684: INFO: Pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.854758ms
Dec 24 10:57:34.706: INFO: Pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029513536s
Dec 24 10:57:36.723: INFO: Pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04680533s
Dec 24 10:57:38.972: INFO: Pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296407077s
Dec 24 10:57:40.993: INFO: Pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316992109s
Dec 24 10:57:43.043: INFO: Pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.367344393s
STEP: Saw pod success
Dec 24 10:57:43.043: INFO: Pod "pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 10:57:43.054: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 24 10:57:43.432: INFO: Waiting for pod pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005 to disappear
Dec 24 10:57:43.456: INFO: Pod pod-configmaps-2f8d6a0f-263c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:57:43.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hw9rc" for this suite.
Dec 24 10:57:51.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:57:51.926: INFO: namespace: e2e-tests-configmap-hw9rc, resource: bindings, ignored listing per whitelist
Dec 24 10:57:51.932: INFO: namespace e2e-tests-configmap-hw9rc deletion completed in 8.359893107s

• [SLOW TEST:19.828 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
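The "with mappings" variant projects individual ConfigMap keys onto chosen file paths through the volume's items list; a minimal sketch with illustrative names (the run above uses generated ones):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
      items:
      - key: data-1            # key in the ConfigMap
        path: path/to/data-1   # file path under the mount point
EOF
kubectl logs configmap-mapping-demo   # prints value-1
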
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:57:51.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 24 10:57:52.100: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890418,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 10:57:52.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890418,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 24 10:58:02.124: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890431,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 24 10:58:02.125: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890431,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 24 10:58:12.193: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890444,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 10:58:12.193: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890444,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 24 10:58:22.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890456,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 10:58:22.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-a,UID:3b2e38fe-263c-11ea-a994-fa163e34d433,ResourceVersion:15890456,Generation:0,CreationTimestamp:2019-12-24 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 24 10:58:32.252: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-b,UID:5318e055-263c-11ea-a994-fa163e34d433,ResourceVersion:15890469,Generation:0,CreationTimestamp:2019-12-24 10:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 10:58:32.252: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-b,UID:5318e055-263c-11ea-a994-fa163e34d433,ResourceVersion:15890469,Generation:0,CreationTimestamp:2019-12-24 10:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 24 10:58:42.280: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-b,UID:5318e055-263c-11ea-a994-fa163e34d433,ResourceVersion:15890482,Generation:0,CreationTimestamp:2019-12-24 10:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 10:58:42.281: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-6wgv2,SelfLink:/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps/e2e-watch-test-configmap-b,UID:5318e055-263c-11ea-a994-fa163e34d433,ResourceVersion:15890482,Generation:0,CreationTimestamp:2019-12-24 10:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:58:52.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-6wgv2" for this suite.
Dec 24 10:58:58.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:58:58.581: INFO: namespace: e2e-tests-watch-6wgv2, resource: bindings, ignored listing per whitelist
Dec 24 10:58:58.655: INFO: namespace e2e-tests-watch-6wgv2 deletion completed in 6.360449696s

• [SLOW TEST:66.723 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
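The ADDED/MODIFIED/DELETED notifications logged above can be reproduced from the command line with the same label selector (the test namespace has since been deleted, so substitute any namespace):

kubectl get configmaps -n e2e-tests-watch-6wgv2 -l watch-this-configmap=multiple-watchers-A --watch
# or stream the raw watch events, whose "type" field matches the ADDED/MODIFIED/DELETED lines above:
kubectl get --raw "/api/v1/namespaces/e2e-tests-watch-6wgv2/configmaps?watch=true&labelSelector=watch-this-configmap%3Dmultiple-watchers-A"
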
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:58:58.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:59:11.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-g4dcf" for this suite.
Dec 24 10:59:17.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:59:17.228: INFO: namespace: e2e-tests-emptydir-wrapper-g4dcf, resource: bindings, ignored listing per whitelist
Dec 24 10:59:17.413: INFO: namespace e2e-tests-emptydir-wrapper-g4dcf deletion completed in 6.33816612s

• [SLOW TEST:18.758 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
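A rough sketch of the setup this test cleans up (a Secret volume and a ConfigMap volume mounted side by side in one pod, matching the cleanup steps logged above), with illustrative names:

kubectl create secret generic wrapper-secret --from-literal=key=value
kubectl create configmap wrapper-configmap --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
EOF
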
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:59:17.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 24 10:59:17.627: INFO: Waiting up to 5m0s for pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-containers-zcfms" to be "success or failure"
Dec 24 10:59:17.646: INFO: Pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.037535ms
Dec 24 10:59:19.654: INFO: Pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027628026s
Dec 24 10:59:21.669: INFO: Pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041935419s
Dec 24 10:59:23.691: INFO: Pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064687734s
Dec 24 10:59:25.716: INFO: Pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088962488s
Dec 24 10:59:27.745: INFO: Pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118490416s
STEP: Saw pod success
Dec 24 10:59:27.745: INFO: Pod "client-containers-6e262e40-263c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 10:59:27.755: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6e262e40-263c-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 10:59:28.077: INFO: Waiting for pod client-containers-6e262e40-263c-11ea-b7c4-0242ac110005 to disappear
Dec 24 10:59:28.097: INFO: Pod client-containers-6e262e40-263c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:59:28.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zcfms" for this suite.
Dec 24 10:59:34.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:59:34.290: INFO: namespace: e2e-tests-containers-zcfms, resource: bindings, ignored listing per whitelist
Dec 24 10:59:34.296: INFO: namespace e2e-tests-containers-zcfms deletion completed in 6.189359939s

• [SLOW TEST:16.883 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
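In pod-spec terms, overriding the image's default arguments (docker CMD) is done with the container's args field, while command would override the ENTRYPOINT; a minimal sketch with an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "override", "arguments"]   # replaces the image's CMD; the ENTRYPOINT, if any, still runs
EOF
kubectl logs args-override-demo
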
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:59:34.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 24 10:59:34.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 24 10:59:34.946: INFO: stderr: ""
Dec 24 10:59:34.946: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:59:34.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lmqzb" for this suite.
Dec 24 10:59:41.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 10:59:41.248: INFO: namespace: e2e-tests-kubectl-lmqzb, resource: bindings, ignored listing per whitelist
Dec 24 10:59:41.258: INFO: namespace e2e-tests-kubectl-lmqzb deletion completed in 6.245613527s

• [SLOW TEST:6.961 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
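The same check can be run by hand; v1 must appear as a whole line in the list printed to stdout above:

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x 'v1'
# grep -x matches complete lines only, so group/versions such as apps/v1 do not satisfy the check
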
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 10:59:41.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 24 10:59:41.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p459h'
Dec 24 10:59:41.868: INFO: stderr: ""
Dec 24 10:59:41.868: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 24 10:59:43.223: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:43.224: INFO: Found 0 / 1
Dec 24 10:59:43.884: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:43.884: INFO: Found 0 / 1
Dec 24 10:59:44.904: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:44.904: INFO: Found 0 / 1
Dec 24 10:59:45.886: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:45.887: INFO: Found 0 / 1
Dec 24 10:59:47.944: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:47.944: INFO: Found 0 / 1
Dec 24 10:59:49.058: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:49.058: INFO: Found 0 / 1
Dec 24 10:59:50.070: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:50.070: INFO: Found 0 / 1
Dec 24 10:59:50.884: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:50.884: INFO: Found 0 / 1
Dec 24 10:59:51.898: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:51.898: INFO: Found 1 / 1
Dec 24 10:59:51.898: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 24 10:59:51.906: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:51.906: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 24 10:59:51.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9w52j --namespace=e2e-tests-kubectl-p459h -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 24 10:59:52.104: INFO: stderr: ""
Dec 24 10:59:52.104: INFO: stdout: "pod/redis-master-9w52j patched\n"
STEP: checking annotations
Dec 24 10:59:52.111: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 10:59:52.112: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 10:59:52.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p459h" for this suite.
Dec 24 11:00:16.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:00:16.289: INFO: namespace: e2e-tests-kubectl-p459h, resource: bindings, ignored listing per whitelist
Dec 24 11:00:16.383: INFO: namespace e2e-tests-kubectl-p459h deletion completed in 24.266207335s

• [SLOW TEST:35.125 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
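The patch and its verification from the run above, spelled out as standalone commands (pod name and namespace taken from the log; the namespace has since been deleted):

kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9w52j --namespace=e2e-tests-kubectl-p459h -p '{"metadata":{"annotations":{"x":"y"}}}'
# read the annotation back; expected output: y
kubectl --kubeconfig=/root/.kube/config get pod redis-master-9w52j --namespace=e2e-tests-kubectl-p459h -o jsonpath='{.metadata.annotations.x}'
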
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:00:16.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 24 11:00:16.638: INFO: Waiting up to 5m0s for pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-x8wmr" to be "success or failure"
Dec 24 11:00:16.685: INFO: Pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.338174ms
Dec 24 11:00:18.698: INFO: Pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059209319s
Dec 24 11:00:20.721: INFO: Pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083063955s
Dec 24 11:00:22.738: INFO: Pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099651673s
Dec 24 11:00:24.761: INFO: Pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123054304s
Dec 24 11:00:26.860: INFO: Pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.2213702s
STEP: Saw pod success
Dec 24 11:00:26.860: INFO: Pod "downward-api-9148b50e-263c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:00:26.874: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-9148b50e-263c-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 11:00:27.195: INFO: Waiting for pod downward-api-9148b50e-263c-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:00:27.218: INFO: Pod downward-api-9148b50e-263c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:00:27.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x8wmr" for this suite.
Dec 24 11:00:33.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:00:33.728: INFO: namespace: e2e-tests-downward-api-x8wmr, resource: bindings, ignored listing per whitelist
Dec 24 11:00:33.749: INFO: namespace e2e-tests-downward-api-x8wmr deletion completed in 6.515804235s

• [SLOW TEST:17.366 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
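Exposing a container's own limits and requests as environment variables uses resourceFieldRef; a minimal sketch with illustrative names and resource values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_REQUEST'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu        # rounded up to whole cores unless a divisor is set
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory   # reported in bytes by default
EOF
kubectl logs downward-api-demo
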
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:00:33.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 24 11:00:54.999: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:00:55.015: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:00:57.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:00:57.028: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:00:59.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:00:59.029: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:01.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:01.029: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:03.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:03.025: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:05.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:05.030: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:07.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:07.022: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:09.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:09.033: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:11.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:11.029: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:13.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:13.044: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:15.016: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:15.029: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:17.016: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:17.059: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:19.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:19.030: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:21.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:21.036: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 11:01:23.015: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 11:01:23.029: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:01:23.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6kdlm" for this suite.
Dec 24 11:01:45.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:01:45.203: INFO: namespace: e2e-tests-container-lifecycle-hook-6kdlm, resource: bindings, ignored listing per whitelist
Dec 24 11:01:45.224: INFO: namespace e2e-tests-container-lifecycle-hook-6kdlm deletion completed in 22.143701818s

• [SLOW TEST:71.474 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
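A preStop exec hook runs inside the container right before it is stopped and has to finish within the termination grace period; a minimal sketch with illustrative names (the hook body here is a simple echo rather than the test's own handler traffic):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-exec-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop hook ran"]   # executed before the container is signalled to stop
EOF
kubectl delete pod prestop-exec-demo   # deletion triggers the preStop hook, then the container is stopped
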
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:01:45.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xzbkk
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xzbkk
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-xzbkk
Dec 24 11:01:45.438: INFO: Found 0 stateful pods, waiting for 1
Dec 24 11:01:55.447: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Dec 24 11:02:05.462: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 24 11:02:05.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 11:02:06.264: INFO: stderr: ""
Dec 24 11:02:06.264: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 11:02:06.264: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 11:02:06.345: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 11:02:06.345: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 11:02:06.428: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:06.429: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:06.429: INFO: 
Dec 24 11:02:06.429: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 24 11:02:08.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.965793507s
Dec 24 11:02:09.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.893726s
Dec 24 11:02:10.772: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.647770715s
Dec 24 11:02:11.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.622261901s
Dec 24 11:02:12.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.561143389s
Dec 24 11:02:13.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.548277495s
Dec 24 11:02:15.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.477052821s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-xzbkk
Dec 24 11:02:16.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:02:17.390: INFO: stderr: ""
Dec 24 11:02:17.390: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 11:02:17.390: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 11:02:17.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:02:17.867: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 24 11:02:17.868: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 11:02:17.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 11:02:17.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:02:18.350: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 24 11:02:18.350: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 11:02:18.350: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 11:02:18.385: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:02:18.385: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Dec 24 11:02:28.421: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:02:28.421: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:02:28.421: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 24 11:02:28.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 11:02:29.008: INFO: stderr: ""
Dec 24 11:02:29.008: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 11:02:29.008: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 11:02:29.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 11:02:29.669: INFO: stderr: ""
Dec 24 11:02:29.669: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 11:02:29.669: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 11:02:29.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 11:02:30.282: INFO: stderr: ""
Dec 24 11:02:30.282: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 11:02:30.282: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 11:02:30.282: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 11:02:30.354: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 24 11:02:40.406: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 11:02:40.406: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 11:02:40.406: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 11:02:40.508: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:40.508: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:40.508: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:40.508: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:40.508: INFO: 
Dec 24 11:02:40.508: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 11:02:42.635: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:42.635: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:42.636: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:42.636: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:42.636: INFO: 
Dec 24 11:02:42.636: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 11:02:43.659: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:43.659: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:43.659: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:43.659: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:43.659: INFO: 
Dec 24 11:02:43.659: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 11:02:44.926: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:44.926: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:44.927: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:44.927: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:44.927: INFO: 
Dec 24 11:02:44.927: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 11:02:46.573: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:46.573: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:46.573: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:46.573: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:46.573: INFO: 
Dec 24 11:02:46.573: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 11:02:47.688: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:47.688: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:47.688: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:47.688: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:47.688: INFO: 
Dec 24 11:02:47.688: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 11:02:48.763: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:48.763: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:01:45 +0000 UTC  }]
Dec 24 11:02:48.764: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:48.764: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:48.764: INFO: 
Dec 24 11:02:48.764: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 11:02:49.790: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 24 11:02:49.790: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:49.790: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:02:06 +0000 UTC  }]
Dec 24 11:02:49.790: INFO: 
Dec 24 11:02:49.790: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-xzbkk
Dec 24 11:02:50.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:02:51.033: INFO: rc: 1
Dec 24 11:02:51.033: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001bc5fb0 exit status 1   true [0xc00000ec08 0xc00000ec78 0xc00000ecb8] [0xc00000ec08 0xc00000ec78 0xc00000ecb8] [0xc00000ec70 0xc00000eca0] [0x935700 0x935700] 0xc001104ba0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 24 11:03:01.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:03:01.191: INFO: rc: 1
Dec 24 11:03:01.192: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0012120f0 exit status 1   true [0xc00000ed28 0xc00000edf0 0xc00000ee50] [0xc00000ed28 0xc00000edf0 0xc00000ee50] [0xc00000edd8 0xc00000ee28] [0x935700 0x935700] 0xc001104e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:03:11.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:03:11.417: INFO: rc: 1
Dec 24 11:03:11.418: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b87020 exit status 1   true [0xc000428728 0xc000428790 0xc0004287d8] [0xc000428728 0xc000428790 0xc0004287d8] [0xc000428778 0xc0004287c0] [0x935700 0x935700] 0xc000e69a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:03:21.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:03:21.564: INFO: rc: 1
Dec 24 11:03:21.565: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001212240 exit status 1   true [0xc00000ee90 0xc00000f010 0xc00000f138] [0xc00000ee90 0xc00000f010 0xc00000f138] [0xc00000ef78 0xc00000f120] [0x935700 0x935700] 0xc0011050e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:03:31.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:03:31.780: INFO: rc: 1
Dec 24 11:03:31.780: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b871a0 exit status 1   true [0xc000428850 0xc0004288d0 0xc0004289b0] [0xc000428850 0xc0004288d0 0xc0004289b0] [0xc0004288c0 0xc000428980] [0x935700 0x935700] 0xc000e69d40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:03:41.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:03:41.928: INFO: rc: 1
Dec 24 11:03:41.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b872f0 exit status 1   true [0xc0004289e8 0xc000428a60 0xc000428b10] [0xc0004289e8 0xc000428a60 0xc000428b10] [0xc000428a28 0xc000428af8] [0x935700 0x935700] 0xc0018e0120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:03:51.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:03:52.127: INFO: rc: 1
Dec 24 11:03:52.128: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0010a3d10 exit status 1   true [0xc000dea0c8 0xc000dea0e0 0xc000dea0f8] [0xc000dea0c8 0xc000dea0e0 0xc000dea0f8] [0xc000dea0d8 0xc000dea0f0] [0x935700 0x935700] 0xc0012c77a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:04:02.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:04:02.269: INFO: rc: 1
Dec 24 11:04:02.270: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0010a3e30 exit status 1   true [0xc000dea100 0xc000dea118 0xc000dea130] [0xc000dea100 0xc000dea118 0xc000dea130] [0xc000dea110 0xc000dea128] [0x935700 0x935700] 0xc0012c7c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:04:12.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:04:12.427: INFO: rc: 1
Dec 24 11:04:12.428: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0012123f0 exit status 1   true [0xc00000f1e8 0xc00000f358 0xc00000f430] [0xc00000f1e8 0xc00000f358 0xc00000f430] [0xc00000f2f8 0xc00000f428] [0x935700 0x935700] 0xc00170ac60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:04:22.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:04:22.603: INFO: rc: 1
Dec 24 11:04:22.604: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b87410 exit status 1   true [0xc000428b18 0xc000428ba0 0xc000428c38] [0xc000428b18 0xc000428ba0 0xc000428c38] [0xc000428b98 0xc000428bc8] [0x935700 0x935700] 0xc0018e06c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:04:32.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:04:32.770: INFO: rc: 1
Dec 24 11:04:32.771: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001212570 exit status 1   true [0xc00000f450 0xc00000f4f0 0xc00000f550] [0xc00000f450 0xc00000f4f0 0xc00000f550] [0xc00000f4a8 0xc00000f538] [0x935700 0x935700] 0xc00170b200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:04:42.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:04:42.948: INFO: rc: 1
Dec 24 11:04:42.948: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4120 exit status 1   true [0xc0000ea308 0xc0004280d0 0xc000428188] [0xc0000ea308 0xc0004280d0 0xc000428188] [0xc0004280b0 0xc000428170] [0x935700 0x935700] 0xc001a284e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:04:52.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:04:53.075: INFO: rc: 1
Dec 24 11:04:53.075: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001212150 exit status 1   true [0xc00000e010 0xc00000ec08 0xc00000ec78] [0xc00000e010 0xc00000ec08 0xc00000ec78] [0xc00000ebf8 0xc00000ec70] [0x935700 0x935700] 0xc0011041e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:05:03.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:05:03.235: INFO: rc: 1
Dec 24 11:05:03.236: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b86480 exit status 1   true [0xc000dea000 0xc000dea018 0xc000dea030] [0xc000dea000 0xc000dea018 0xc000dea030] [0xc000dea010 0xc000dea028] [0x935700 0x935700] 0xc000e68720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:05:13.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:05:13.446: INFO: rc: 1
Dec 24 11:05:13.446: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc42d0 exit status 1   true [0xc0004281a0 0xc0004282a8 0xc000428378] [0xc0004281a0 0xc0004282a8 0xc000428378] [0xc0004281e8 0xc000428350] [0x935700 0x935700] 0xc001a28cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:05:23.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:05:23.611: INFO: rc: 1
Dec 24 11:05:23.612: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc43f0 exit status 1   true [0xc000428390 0xc000428438 0xc000428508] [0xc000428390 0xc000428438 0xc000428508] [0xc000428418 0xc0004284b0] [0x935700 0x935700] 0xc001a29c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:05:33.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:05:33.781: INFO: rc: 1
Dec 24 11:05:33.781: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4540 exit status 1   true [0xc000428528 0xc0004285c0 0xc0004285f0] [0xc000428528 0xc0004285c0 0xc0004285f0] [0xc0004285b0 0xc0004285e0] [0x935700 0x935700] 0xc00170aae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:05:43.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:05:43.975: INFO: rc: 1
Dec 24 11:05:43.976: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4660 exit status 1   true [0xc000428628 0xc000428658 0xc0004286e8] [0xc000428628 0xc000428658 0xc0004286e8] [0xc000428648 0xc0004286b8] [0x935700 0x935700] 0xc00170b020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:05:53.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:05:54.092: INFO: rc: 1
Dec 24 11:05:54.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001212330 exit status 1   true [0xc00000ec88 0xc00000ed28 0xc00000edf0] [0xc00000ec88 0xc00000ed28 0xc00000edf0] [0xc00000ecb8 0xc00000edd8] [0x935700 0x935700] 0xc001104480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:06:04.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:06:04.235: INFO: rc: 1
Dec 24 11:06:04.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b865d0 exit status 1   true [0xc000dea038 0xc000dea050 0xc000dea068] [0xc000dea038 0xc000dea050 0xc000dea068] [0xc000dea048 0xc000dea060] [0x935700 0x935700] 0xc000e69560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:06:14.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:06:14.466: INFO: rc: 1
Dec 24 11:06:14.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b86720 exit status 1   true [0xc000dea070 0xc000dea088 0xc000dea0a0] [0xc000dea070 0xc000dea088 0xc000dea0a0] [0xc000dea080 0xc000dea098] [0x935700 0x935700] 0xc000e69b60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:06:24.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:06:24.694: INFO: rc: 1
Dec 24 11:06:24.695: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4840 exit status 1   true [0xc000428700 0xc000428740 0xc0004287a0] [0xc000428700 0xc000428740 0xc0004287a0] [0xc000428728 0xc000428790] [0x935700 0x935700] 0xc00170bb60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:06:34.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:06:34.809: INFO: rc: 1
Dec 24 11:06:34.809: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4960 exit status 1   true [0xc0004287c0 0xc0004288b8 0xc000428900] [0xc0004287c0 0xc0004288b8 0xc000428900] [0xc000428850 0xc0004288d0] [0x935700 0x935700] 0xc0018e05a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:06:44.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:06:44.920: INFO: rc: 1
Dec 24 11:06:44.921: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001212120 exit status 1   true [0xc0000ea308 0xc0004280d0 0xc000428188] [0xc0000ea308 0xc0004280d0 0xc000428188] [0xc0004280b0 0xc000428170] [0x935700 0x935700] 0xc001a28780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:06:54.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:06:55.050: INFO: rc: 1
Dec 24 11:06:55.050: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4150 exit status 1   true [0xc000dea000 0xc000dea018 0xc000dea030] [0xc000dea000 0xc000dea018 0xc000dea030] [0xc000dea010 0xc000dea028] [0x935700 0x935700] 0xc00170ade0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:07:05.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:07:05.202: INFO: rc: 1
Dec 24 11:07:05.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4330 exit status 1   true [0xc000dea038 0xc000dea050 0xc000dea068] [0xc000dea038 0xc000dea050 0xc000dea068] [0xc000dea048 0xc000dea060] [0x935700 0x935700] 0xc00170b860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:07:15.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:07:15.360: INFO: rc: 1
Dec 24 11:07:15.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc44b0 exit status 1   true [0xc000dea070 0xc000dea088 0xc000dea0a0] [0xc000dea070 0xc000dea088 0xc000dea0a0] [0xc000dea080 0xc000dea098] [0x935700 0x935700] 0xc001104120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:07:25.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:07:25.520: INFO: rc: 1
Dec 24 11:07:25.520: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0010d0120 exit status 1   true [0xc00000e010 0xc00000ec08 0xc00000ec78] [0xc00000e010 0xc00000ec08 0xc00000ec78] [0xc00000ebf8 0xc00000ec70] [0x935700 0x935700] 0xc0018e0420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:07:35.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:07:35.680: INFO: rc: 1
Dec 24 11:07:35.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001bc4600 exit status 1   true [0xc000dea0a8 0xc000dea0c0 0xc000dea0d8] [0xc000dea0a8 0xc000dea0c0 0xc000dea0d8] [0xc000dea0b8 0xc000dea0d0] [0x935700 0x935700] 0xc0011043c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:07:45.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:07:45.876: INFO: rc: 1
Dec 24 11:07:45.876: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b86240 exit status 1   true [0xc000c32000 0xc000c32018 0xc000c32030] [0xc000c32000 0xc000c32018 0xc000c32030] [0xc000c32010 0xc000c32028] [0x935700 0x935700] 0xc000e68720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 24 11:07:55.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xzbkk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:07:56.019: INFO: rc: 1
Dec 24 11:07:56.019: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Dec 24 11:07:56.019: INFO: Scaling statefulset ss to 0
Dec 24 11:07:56.038: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 24 11:07:56.042: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xzbkk
Dec 24 11:07:56.045: INFO: Scaling statefulset ss to 0
Dec 24 11:07:56.054: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 11:07:56.058: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:07:56.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xzbkk" for this suite.
Dec 24 11:08:04.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:08:04.343: INFO: namespace: e2e-tests-statefulset-xzbkk, resource: bindings, ignored listing per whitelist
Dec 24 11:08:04.346: INFO: namespace e2e-tests-statefulset-xzbkk deletion completed in 8.132096396s

• [SLOW TEST:379.122 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
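For reference, the scale-down that the burst-scaling spec drives programmatically can be reproduced by hand with plain kubectl. A minimal sketch, assuming the namespace and StatefulSet name from this run (e2e-tests-statefulset-xzbkk, ss) still exist:

# Scale the StatefulSet down to zero replicas, as the test does before tearing down.
kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 \
  --namespace=e2e-tests-statefulset-xzbkk

# Poll status.replicas until it reports 0; the framework waits on the same field
# ("Waiting for statefulset status.replicas updated to 0").
kubectl --kubeconfig=/root/.kube/config get statefulset ss \
  --namespace=e2e-tests-statefulset-xzbkk -o jsonpath='{.status.replicas}'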
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:08:04.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 11:08:04.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-fhjg4'
Dec 24 11:08:06.563: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 11:08:06.563: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 24 11:08:06.688: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-769pp]
Dec 24 11:08:06.688: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-769pp" in namespace "e2e-tests-kubectl-fhjg4" to be "running and ready"
Dec 24 11:08:06.751: INFO: Pod "e2e-test-nginx-rc-769pp": Phase="Pending", Reason="", readiness=false. Elapsed: 63.303244ms
Dec 24 11:08:08.776: INFO: Pod "e2e-test-nginx-rc-769pp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087420483s
Dec 24 11:08:10.793: INFO: Pod "e2e-test-nginx-rc-769pp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104593508s
Dec 24 11:08:12.996: INFO: Pod "e2e-test-nginx-rc-769pp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307653174s
Dec 24 11:08:15.011: INFO: Pod "e2e-test-nginx-rc-769pp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323226159s
Dec 24 11:08:17.042: INFO: Pod "e2e-test-nginx-rc-769pp": Phase="Running", Reason="", readiness=true. Elapsed: 10.353993139s
Dec 24 11:08:17.042: INFO: Pod "e2e-test-nginx-rc-769pp" satisfied condition "running and ready"
Dec 24 11:08:17.042: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-769pp]
Dec 24 11:08:17.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fhjg4'
Dec 24 11:08:17.289: INFO: stderr: ""
Dec 24 11:08:17.289: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 24 11:08:17.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fhjg4'
Dec 24 11:08:17.438: INFO: stderr: ""
Dec 24 11:08:17.438: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:08:17.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fhjg4" for this suite.
Dec 24 11:08:41.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:08:41.558: INFO: namespace: e2e-tests-kubectl-fhjg4, resource: bindings, ignored listing per whitelist
Dec 24 11:08:41.719: INFO: namespace e2e-tests-kubectl-fhjg4 deletion completed in 24.237005057s

• [SLOW TEST:37.373 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
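The three kubectl invocations above (run, logs, delete) are the whole workflow this spec exercises; collected here as a minimal sketch, with the namespace name taken from this run. The run/v1 generator is deprecated in this release, which is why the stderr warning appears in the log:

# Create the ReplicationController from the image, exactly as the test invokes it.
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=e2e-tests-kubectl-fhjg4

# Fetch logs through the rc reference and clean up, mirroring the test steps.
kubectl logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fhjg4
kubectl delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fhjg4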
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:08:41.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 24 11:08:42.285: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 24 11:08:47.303: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:08:48.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-4q2t2" for this suite.
Dec 24 11:08:55.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:08:55.400: INFO: namespace: e2e-tests-replication-controller-4q2t2, resource: bindings, ignored listing per whitelist
Dec 24 11:08:55.502: INFO: namespace e2e-tests-replication-controller-4q2t2 deletion completed in 7.020110551s

• [SLOW TEST:13.782 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
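The "release" step above works by changing the matched label on one of the controller's pods so it no longer satisfies the RC's selector. A minimal sketch, assuming a selector key of name=pod-release (the pod-name prefix from this run); the exact key and the generated pod suffix are not shown in the log:

# Overwrite the matched label so the ReplicationController releases the pod.
kubectl label pod pod-release-<generated-suffix> name=released --overwrite \
  --namespace=e2e-tests-replication-controller-4q2t2

# The released pod keeps running but is no longer owned by the controller.
kubectl get pods --namespace=e2e-tests-replication-controller-4q2t2 --show-labels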
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:08:55.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c81dba57-263d-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 11:08:58.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-85lwj" to be "success or failure"
Dec 24 11:08:58.256: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 96.282908ms
Dec 24 11:09:00.591: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431722358s
Dec 24 11:09:02.603: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443344219s
Dec 24 11:09:05.008: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.848477543s
Dec 24 11:09:07.023: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863349608s
Dec 24 11:09:09.036: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.87646653s
Dec 24 11:09:11.162: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.002188308s
STEP: Saw pod success
Dec 24 11:09:11.162: INFO: Pod "pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:09:11.169: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 11:09:11.678: INFO: Waiting for pod pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:09:11.702: INFO: Pod pod-projected-configmaps-c823fb25-263d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:09:11.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-85lwj" for this suite.
Dec 24 11:09:17.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:09:17.945: INFO: namespace: e2e-tests-projected-85lwj, resource: bindings, ignored listing per whitelist
Dec 24 11:09:17.958: INFO: namespace e2e-tests-projected-85lwj deletion completed in 6.235859818s

• [SLOW TEST:22.455 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
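A minimal sketch of creating a ConfigMap like the one this spec consumes; only the ConfigMap name and namespace come from this run, the key/value pair is an assumption for illustration:

# Create the ConfigMap the test pod will consume (key/value assumed).
kubectl create configmap \
  projected-configmap-test-volume-map-c81dba57-263d-11ea-b7c4-0242ac110005 \
  --from-literal=data-1=value-1 --namespace=e2e-tests-projected-85lwj

# The consuming pod (generated by the framework, not shown in the log) mounts it
# through a projected volume with items[].path and items[].mode set, then verifies
# the mapped file name, mode, and contents inside the container.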
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:09:17.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:09:18.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-84pbg" to be "success or failure"
Dec 24 11:09:18.392: INFO: Pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 154.566688ms
Dec 24 11:09:20.410: INFO: Pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172818053s
Dec 24 11:09:22.438: INFO: Pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200855282s
Dec 24 11:09:24.459: INFO: Pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221616407s
Dec 24 11:09:26.500: INFO: Pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262238222s
Dec 24 11:09:28.531: INFO: Pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.293506153s
STEP: Saw pod success
Dec 24 11:09:28.531: INFO: Pod "downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:09:28.580: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:09:29.320: INFO: Waiting for pod downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:09:29.643: INFO: Pod downwardapi-volume-d4237d3d-263d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:09:29.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-84pbg" for this suite.
Dec 24 11:09:35.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:09:35.856: INFO: namespace: e2e-tests-projected-84pbg, resource: bindings, ignored listing per whitelist
Dec 24 11:09:35.932: INFO: namespace e2e-tests-projected-84pbg deletion completed in 6.273045575s

• [SLOW TEST:17.974 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:09:35.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 24 11:09:36.168: INFO: Waiting up to 5m0s for pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-qhsdd" to be "success or failure"
Dec 24 11:09:36.257: INFO: Pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 89.259121ms
Dec 24 11:09:38.299: INFO: Pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130667746s
Dec 24 11:09:40.491: INFO: Pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323436715s
Dec 24 11:09:42.558: INFO: Pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389796546s
Dec 24 11:09:44.780: INFO: Pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.611609693s
Dec 24 11:09:46.928: INFO: Pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.759985853s
STEP: Saw pod success
Dec 24 11:09:46.928: INFO: Pod "pod-ded1b6c2-263d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:09:46.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ded1b6c2-263d-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:09:47.116: INFO: Waiting for pod pod-ded1b6c2-263d-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:09:47.150: INFO: Pod pod-ded1b6c2-263d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:09:47.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qhsdd" for this suite.
Dec 24 11:09:53.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:09:53.461: INFO: namespace: e2e-tests-emptydir-qhsdd, resource: bindings, ignored listing per whitelist
Dec 24 11:09:53.465: INFO: namespace e2e-tests-emptydir-qhsdd deletion completed in 6.292311567s

• [SLOW TEST:17.533 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:09:53.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e936abb5-263d-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:09:53.735: INFO: Waiting up to 5m0s for pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-zw6zw" to be "success or failure"
Dec 24 11:09:53.759: INFO: Pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.78963ms
Dec 24 11:09:56.182: INFO: Pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.446712403s
Dec 24 11:09:58.199: INFO: Pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463487816s
Dec 24 11:10:00.218: INFO: Pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482370322s
Dec 24 11:10:02.245: INFO: Pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.509116564s
Dec 24 11:10:04.277: INFO: Pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.541402429s
STEP: Saw pod success
Dec 24 11:10:04.277: INFO: Pod "pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:10:04.294: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 11:10:04.608: INFO: Waiting for pod pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:10:04.733: INFO: Pod pod-secrets-e94c5776-263d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:10:04.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zw6zw" for this suite.
Dec 24 11:10:10.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:10:11.058: INFO: namespace: e2e-tests-secrets-zw6zw, resource: bindings, ignored listing per whitelist
Dec 24 11:10:11.090: INFO: namespace e2e-tests-secrets-zw6zw deletion completed in 6.330863536s

• [SLOW TEST:17.625 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:10:11.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-fm5t
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 11:10:11.352: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fm5t" in namespace "e2e-tests-subpath-zngb6" to be "success or failure"
Dec 24 11:10:11.430: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 78.153817ms
Dec 24 11:10:13.561: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209492975s
Dec 24 11:10:15.574: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221893286s
Dec 24 11:10:17.979: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.627242209s
Dec 24 11:10:20.232: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879681265s
Dec 24 11:10:22.250: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.898055037s
Dec 24 11:10:24.272: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 12.919826099s
Dec 24 11:10:26.286: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 14.933657211s
Dec 24 11:10:28.311: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 16.959121811s
Dec 24 11:10:30.328: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 18.976217139s
Dec 24 11:10:32.345: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 20.993464331s
Dec 24 11:10:34.364: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 23.012512646s
Dec 24 11:10:36.378: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 25.026415586s
Dec 24 11:10:38.391: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 27.039175362s
Dec 24 11:10:40.412: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 29.059759986s
Dec 24 11:10:42.429: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 31.077518047s
Dec 24 11:10:44.452: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Running", Reason="", readiness=false. Elapsed: 33.099988555s
Dec 24 11:10:46.480: INFO: Pod "pod-subpath-test-downwardapi-fm5t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.128372613s
STEP: Saw pod success
Dec 24 11:10:46.480: INFO: Pod "pod-subpath-test-downwardapi-fm5t" satisfied condition "success or failure"
Dec 24 11:10:46.489: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-fm5t container test-container-subpath-downwardapi-fm5t: 
STEP: delete the pod
Dec 24 11:10:47.115: INFO: Waiting for pod pod-subpath-test-downwardapi-fm5t to disappear
Dec 24 11:10:47.714: INFO: Pod pod-subpath-test-downwardapi-fm5t no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fm5t
Dec 24 11:10:47.714: INFO: Deleting pod "pod-subpath-test-downwardapi-fm5t" in namespace "e2e-tests-subpath-zngb6"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:10:47.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zngb6" for this suite.
Dec 24 11:10:55.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:10:56.099: INFO: namespace: e2e-tests-subpath-zngb6, resource: bindings, ignored listing per whitelist
Dec 24 11:10:56.226: INFO: namespace e2e-tests-subpath-zngb6 deletion completed in 8.447117747s

• [SLOW TEST:45.136 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:10:56.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 11:10:56.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 24 11:10:56.728: INFO: stderr: ""
Dec 24 11:10:56.728: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 24 11:10:56.745: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:10:56.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l5mbj" for this suite.
Dec 24 11:11:02.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:11:02.990: INFO: namespace: e2e-tests-kubectl-l5mbj, resource: bindings, ignored listing per whitelist
Dec 24 11:11:02.990: INFO: namespace e2e-tests-kubectl-l5mbj deletion completed in 6.215984229s

S [SKIPPING] [6.763 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 24 11:10:56.745: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
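The spec above is skipped because the server (v1.13.8) is older than the version the test requires; when it does run, it checks that kubectl describe prints the expected information for a replication controller and its pods, as the spec name says. A manual spot check against any existing rc, with placeholder names, is simply:

    kubectl describe rc <rc-name>       # expect name, namespace, selector, replica counts, pod template and events
    kubectl describe pod <pod-name>     # expect labels, node, IP, container image and state, and events
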
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:11:02.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-zxts2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zxts2 to expose endpoints map[]
Dec 24 11:11:03.282: INFO: Get endpoints failed (84.417125ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 24 11:11:04.287: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zxts2 exposes endpoints map[] (1.089878613s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zxts2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zxts2 to expose endpoints map[pod1:[80]]
Dec 24 11:11:08.847: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.544590095s elapsed, will retry)
Dec 24 11:11:14.121: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zxts2 exposes endpoints map[pod1:[80]] (9.818402429s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zxts2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zxts2 to expose endpoints map[pod2:[80] pod1:[80]]
Dec 24 11:11:18.721: INFO: Unexpected endpoints: found map[135deaf2-263e-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.588374852s elapsed, will retry)
Dec 24 11:11:23.108: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zxts2 exposes endpoints map[pod2:[80] pod1:[80]] (8.975518392s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zxts2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zxts2 to expose endpoints map[pod2:[80]]
Dec 24 11:11:24.348: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zxts2 exposes endpoints map[pod2:[80]] (1.225751594s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zxts2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zxts2 to expose endpoints map[]
Dec 24 11:11:24.494: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zxts2 exposes endpoints map[] (60.636145ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:11:24.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zxts2" for this suite.
Dec 24 11:11:49.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:11:49.277: INFO: namespace: e2e-tests-services-zxts2, resource: bindings, ignored listing per whitelist
Dec 24 11:11:49.408: INFO: namespace e2e-tests-services-zxts2 deletion completed in 24.362770519s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.417 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
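The Services spec above creates a service with a selector, adds and removes labelled pods, and checks that the Endpoints object tracks them. A hand-written equivalent with illustrative names (the test itself uses endpoint-test2 and pods pod1/pod2):

    apiVersion: v1
    kind: Service
    metadata:
      name: endpoint-demo
    spec:
      selector:
        app: endpoint-demo
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      labels:
        app: endpoint-demo
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80

kubectl get endpoints endpoint-demo should list pod1's IP on port 80 once the pod is ready, and return to an empty subset when the pod is deleted.
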
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:11:49.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 24 11:11:49.610: INFO: Waiting up to 5m0s for pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-jj48h" to be "success or failure"
Dec 24 11:11:49.619: INFO: Pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.599654ms
Dec 24 11:11:51.634: INFO: Pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023521134s
Dec 24 11:11:53.647: INFO: Pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036539744s
Dec 24 11:11:55.764: INFO: Pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153581934s
Dec 24 11:11:57.773: INFO: Pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162842727s
Dec 24 11:11:59.851: INFO: Pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.240682797s
STEP: Saw pod success
Dec 24 11:11:59.851: INFO: Pod "pod-2e5c19c1-263e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:11:59.875: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2e5c19c1-263e-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:12:00.070: INFO: Waiting for pod pod-2e5c19c1-263e-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:12:00.078: INFO: Pod pod-2e5c19c1-263e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:12:00.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jj48h" for this suite.
Dec 24 11:12:06.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:12:06.200: INFO: namespace: e2e-tests-emptydir-jj48h, resource: bindings, ignored listing per whitelist
Dec 24 11:12:06.302: INFO: namespace e2e-tests-emptydir-jj48h deletion completed in 6.212667767s

• [SLOW TEST:16.894 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
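This EmptyDir spec writes a file with mode 0644 into an emptyDir volume as a non-root user on the default (disk-backed) medium and reads it back. The test image does this with dedicated flags; a rough busybox equivalent, with illustrative names, is sketched below. The (root,0644,tmpfs) variant further down only drops runAsUser and sets medium: Memory on the volume.

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                   # any non-root UID
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "echo hello > /data/f && chmod 0644 /data/f && ls -l /data/f && cat /data/f"]
        volumeMounts:
        - name: scratch
          mountPath: /data
      volumes:
      - name: scratch
        emptyDir: {}                      # default medium; use "medium: Memory" for the tmpfs variant
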
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:12:06.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 24 11:12:16.845: INFO: Pod pod-hostip-3887a921-263e-11ea-b7c4-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:12:16.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4rnxq" for this suite.
Dec 24 11:12:40.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:12:40.985: INFO: namespace: e2e-tests-pods-4rnxq, resource: bindings, ignored listing per whitelist
Dec 24 11:12:41.106: INFO: namespace e2e-tests-pods-4rnxq deletion completed in 24.251223251s

• [SLOW TEST:34.804 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
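The Pods spec above only asserts that a running pod reports the IP of the node it landed on in status.hostIP. Checked by hand (illustrative pod name and image):

    kubectl run hostip-demo --image=nginx --restart=Never
    kubectl wait --for=condition=Ready pod/hostip-demo
    kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
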
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:12:41.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 24 11:12:52.081: INFO: Successfully updated pod "labelsupdate4d3a1d51-263e-11ea-b7c4-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:12:54.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fh4qh" for this suite.
Dec 24 11:13:18.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:13:18.347: INFO: namespace: e2e-tests-downward-api-fh4qh, resource: bindings, ignored listing per whitelist
Dec 24 11:13:18.640: INFO: namespace e2e-tests-downward-api-fh4qh deletion completed in 24.479742584s

• [SLOW TEST:37.534 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
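Here the downward API volume projects the pod's labels into a file, the test relabels the pod, and the kubelet is expected to rewrite the file on its next sync. A hand-written version of the same flow (names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        tier: demo
    spec:
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

After kubectl label pod labels-demo tier=updated --overwrite, the contents of /etc/podinfo/labels catch up within the kubelet's sync period, which the pod's log makes visible.
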
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:13:18.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:13:19.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mk5wf" for this suite.
Dec 24 11:13:25.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:13:25.256: INFO: namespace: e2e-tests-kubelet-test-mk5wf, resource: bindings, ignored listing per whitelist
Dec 24 11:13:25.338: INFO: namespace e2e-tests-kubelet-test-mk5wf deletion completed in 6.294706363s

• [SLOW TEST:6.698 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:13:25.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-678e7f13-263e-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:13:25.567: INFO: Waiting up to 5m0s for pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-rphr6" to be "success or failure"
Dec 24 11:13:25.576: INFO: Pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.952423ms
Dec 24 11:13:27.598: INFO: Pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030291328s
Dec 24 11:13:29.609: INFO: Pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041476408s
Dec 24 11:13:31.656: INFO: Pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088700505s
Dec 24 11:13:33.791: INFO: Pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223873674s
Dec 24 11:13:35.803: INFO: Pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.235146967s
STEP: Saw pod success
Dec 24 11:13:35.803: INFO: Pod "pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:13:35.806: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 11:13:36.010: INFO: Waiting for pod pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:13:36.236: INFO: Pod pod-secrets-678f6a32-263e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:13:36.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rphr6" for this suite.
Dec 24 11:13:42.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:13:42.694: INFO: namespace: e2e-tests-secrets-rphr6, resource: bindings, ignored listing per whitelist
Dec 24 11:13:42.727: INFO: namespace e2e-tests-secrets-rphr6 deletion completed in 6.477788525s

• [SLOW TEST:17.388 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
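This Secrets spec mounts a secret volume with an explicit defaultMode and checks both the file content and its permissions; the later "consumable from pods in volume" spec is the same flow without the defaultMode field. With an illustrative secret created first via kubectl create secret generic secret-demo --from-literal=password=s3cr3t, the consuming pod is roughly:

    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret && cat /etc/secret/password"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret
          readOnly: true
      volumes:
      - name: secret-vol
        secret:
          secretName: secret-demo
          defaultMode: 0400               # octal; 256 in decimal/JSON. Omit for the plain volume test.
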
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:13:42.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 24 11:13:42.973: INFO: Waiting up to 5m0s for pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-8kxln" to be "success or failure"
Dec 24 11:13:42.998: INFO: Pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.331246ms
Dec 24 11:13:45.421: INFO: Pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447636979s
Dec 24 11:13:47.559: INFO: Pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58572359s
Dec 24 11:13:49.761: INFO: Pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.788078578s
Dec 24 11:13:51.780: INFO: Pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.806918532s
Dec 24 11:13:53.794: INFO: Pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.820624051s
STEP: Saw pod success
Dec 24 11:13:53.794: INFO: Pod "pod-71ef6918-263e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:13:53.800: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-71ef6918-263e-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:13:54.581: INFO: Waiting for pod pod-71ef6918-263e-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:13:54.600: INFO: Pod pod-71ef6918-263e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:13:54.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8kxln" for this suite.
Dec 24 11:14:00.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:14:00.921: INFO: namespace: e2e-tests-emptydir-8kxln, resource: bindings, ignored listing per whitelist
Dec 24 11:14:00.926: INFO: namespace e2e-tests-emptydir-8kxln deletion completed in 6.307613856s

• [SLOW TEST:18.199 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:14:00.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7cb9d10e-263e-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:14:01.148: INFO: Waiting up to 5m0s for pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-4t56r" to be "success or failure"
Dec 24 11:14:01.164: INFO: Pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.956627ms
Dec 24 11:14:03.207: INFO: Pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058241061s
Dec 24 11:14:05.273: INFO: Pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125002084s
Dec 24 11:14:07.512: INFO: Pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363703341s
Dec 24 11:14:09.543: INFO: Pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395044289s
Dec 24 11:14:11.586: INFO: Pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.437116874s
STEP: Saw pod success
Dec 24 11:14:11.586: INFO: Pod "pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:14:11.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 11:14:11.696: INFO: Waiting for pod pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:14:11.705: INFO: Pod pod-secrets-7cc4f779-263e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:14:11.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4t56r" for this suite.
Dec 24 11:14:17.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:14:17.955: INFO: namespace: e2e-tests-secrets-4t56r, resource: bindings, ignored listing per whitelist
Dec 24 11:14:18.051: INFO: namespace e2e-tests-secrets-4t56r deletion completed in 6.336005919s

• [SLOW TEST:17.125 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:14:18.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:14:18.297: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-7skst" to be "success or failure"
Dec 24 11:14:18.312: INFO: Pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.441917ms
Dec 24 11:14:20.348: INFO: Pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05023828s
Dec 24 11:14:22.363: INFO: Pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065591987s
Dec 24 11:14:24.574: INFO: Pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276463006s
Dec 24 11:14:26.769: INFO: Pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472067915s
Dec 24 11:14:28.971: INFO: Pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.673764866s
STEP: Saw pod success
Dec 24 11:14:28.971: INFO: Pod "downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:14:28.979: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:14:29.273: INFO: Waiting for pod downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:14:29.326: INFO: Pod downwardapi-volume-86fcce00-263e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:14:29.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7skst" for this suite.
Dec 24 11:14:35.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:14:35.709: INFO: namespace: e2e-tests-projected-7skst, resource: bindings, ignored listing per whitelist
Dec 24 11:14:35.778: INFO: namespace e2e-tests-projected-7skst deletion completed in 6.406818476s

• [SLOW TEST:17.726 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
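The projected downwardAPI spec exposes the container's CPU request as a file and verifies the value. A minimal manifest doing the same (names and the 250m request are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_request
                resourceFieldRef:
                  containerName: test
                  resource: requests.cpu
                  divisor: 1m             # file contains "250"; the default divisor of 1 rounds up to whole CPUs
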
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:14:35.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 24 11:14:36.001: INFO: Waiting up to 5m0s for pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-var-expansion-rvncn" to be "success or failure"
Dec 24 11:14:36.011: INFO: Pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.226245ms
Dec 24 11:14:38.030: INFO: Pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029017547s
Dec 24 11:14:40.042: INFO: Pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041142294s
Dec 24 11:14:42.190: INFO: Pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189576618s
Dec 24 11:14:44.403: INFO: Pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401926483s
Dec 24 11:14:46.512: INFO: Pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.511483775s
STEP: Saw pod success
Dec 24 11:14:46.513: INFO: Pod "var-expansion-918bba25-263e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:14:46.527: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-918bba25-263e-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 11:14:46.734: INFO: Waiting for pod var-expansion-918bba25-263e-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:14:46.741: INFO: Pod var-expansion-918bba25-263e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:14:46.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-rvncn" for this suite.
Dec 24 11:14:52.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:14:52.943: INFO: namespace: e2e-tests-var-expansion-rvncn, resource: bindings, ignored listing per whitelist
Dec 24 11:14:52.992: INFO: namespace e2e-tests-var-expansion-rvncn deletion completed in 6.244381134s

• [SLOW TEST:17.214 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
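The Variable Expansion spec defines env vars that reference earlier env vars with $(VAR) syntax and checks the composed value inside the container. For example (illustrative names):

    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "echo $COMPOSED"]
        env:
        - name: FIRST
          value: "hello"
        - name: COMPOSED
          value: "$(FIRST)-world"         # expanded by the kubelet; only variables defined earlier in the list are substituted

The container prints hello-world.
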
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:14:52.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:14:53.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7lr27" for this suite.
Dec 24 11:15:17.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:15:17.651: INFO: namespace: e2e-tests-pods-7lr27, resource: bindings, ignored listing per whitelist
Dec 24 11:15:17.744: INFO: namespace e2e-tests-pods-7lr27 deletion completed in 24.343872435s

• [SLOW TEST:24.751 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
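The QOS spec creates a pod whose containers have equal requests and limits and verifies the class assigned in status.qosClass. Equivalent check by hand (illustrative name and sizes):

    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: test
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 100m
            memory: 128Mi                 # requests == limits for every container => Guaranteed

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}' prints Guaranteed; requests below limits would give Burstable, and no requests or limits at all gives BestEffort.
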
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:15:17.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:15:28.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sf7qd" for this suite.
Dec 24 11:16:10.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:16:10.398: INFO: namespace: e2e-tests-kubelet-test-sf7qd, resource: bindings, ignored listing per whitelist
Dec 24 11:16:10.419: INFO: namespace e2e-tests-kubelet-test-sf7qd deletion completed in 42.309993384s

• [SLOW TEST:52.675 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
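The read-only busybox spec runs a container with readOnlyRootFilesystem and verifies writes to the root filesystem fail. A small stand-alone version (illustrative name):

    apiVersion: v1
    kind: Pod
    metadata:
      name: readonly-rootfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "touch /newfile && echo unexpected-write || echo write-refused"]
        securityContext:
          readOnlyRootFilesystem: true

The pod's log should read write-refused, since / is mounted read-only.
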
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:16:10.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ca13dda4-263e-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 11:16:10.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-k6rbc" to be "success or failure"
Dec 24 11:16:10.888: INFO: Pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.208837ms
Dec 24 11:16:12.910: INFO: Pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049611406s
Dec 24 11:16:14.943: INFO: Pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083134814s
Dec 24 11:16:17.018: INFO: Pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157588673s
Dec 24 11:16:19.036: INFO: Pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175468325s
Dec 24 11:16:21.049: INFO: Pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.188971429s
STEP: Saw pod success
Dec 24 11:16:21.049: INFO: Pod "pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:16:21.059: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 24 11:16:21.524: INFO: Waiting for pod pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:16:21.818: INFO: Pod pod-configmaps-ca157ed5-263e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:16:21.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-k6rbc" for this suite.
Dec 24 11:16:27.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:16:28.190: INFO: namespace: e2e-tests-configmap-k6rbc, resource: bindings, ignored listing per whitelist
Dec 24 11:16:28.215: INFO: namespace e2e-tests-configmap-k6rbc deletion completed in 6.353583678s

• [SLOW TEST:17.794 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
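The ConfigMap volume spec mounts a configMap and reads one of its keys back as a file. With an illustrative map created via kubectl create configmap configmap-demo --from-literal=data-1=value-1, the consuming pod is just:

    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "cat /etc/config/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: configmap-demo
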
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:16:28.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1224 11:17:09.043896       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 11:17:09.044: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:17:09.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dtxfd" for this suite.
Dec 24 11:17:33.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:17:33.122: INFO: namespace: e2e-tests-gc-dtxfd, resource: bindings, ignored listing per whitelist
Dec 24 11:17:33.245: INFO: namespace e2e-tests-gc-dtxfd deletion completed in 24.193343739s

• [SLOW TEST:65.030 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
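The garbage collector spec deletes a replication controller with the Orphan propagation policy and then waits 30 seconds to make sure the GC does not remove the controller's pods. The same behaviour can be reproduced with kubectl (rc and selector names are placeholders):

    kubectl delete rc <rc-name> --cascade=orphan    # kubectl v1.20+; older clients such as v1.13 use --cascade=false
    kubectl get pods -l <rc-selector>               # the replicas are still running, now without an owning controller
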
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:17:33.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005
Dec 24 11:17:33.557: INFO: Pod name my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005: Found 0 pods out of 1
Dec 24 11:17:38.596: INFO: Pod name my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005: Found 1 pods out of 1
Dec 24 11:17:38.596: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005" are running
Dec 24 11:17:44.681: INFO: Pod "my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005-cw2tg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 11:17:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 11:17:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 11:17:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 11:17:33 +0000 UTC Reason: Message:}])
Dec 24 11:17:44.681: INFO: Trying to dial the pod
Dec 24 11:17:49.729: INFO: Controller my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005: Got expected result from replica 1 [my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005-cw2tg]: "my-hostname-basic-fb59e975-263e-11ea-b7c4-0242ac110005-cw2tg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:17:49.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-tww9j" for this suite.
Dec 24 11:17:55.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:17:55.865: INFO: namespace: e2e-tests-replication-controller-tww9j, resource: bindings, ignored listing per whitelist
Dec 24 11:17:55.911: INFO: namespace e2e-tests-replication-controller-tww9j deletion completed in 6.174680573s

• [SLOW TEST:22.666 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
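This ReplicationController spec checks that every replica of a controller comes up and answers over HTTP with its own hostname. A comparable hand-written controller (image and names are illustrative; the e2e test uses a dedicated hostname-serving image):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: rc-demo
    spec:
      replicas: 2
      selector:
        app: rc-demo
      template:
        metadata:
          labels:
            app: rc-demo
        spec:
          containers:
          - name: web
            image: nginx
            ports:
            - containerPort: 80

kubectl get pods -l app=rc-demo -o wide lists both replicas; curling each pod IP from inside the cluster confirms every replica responds.
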
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:17:55.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-nlz2
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 11:17:56.137: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nlz2" in namespace "e2e-tests-subpath-hcrwk" to be "success or failure"
Dec 24 11:17:56.160: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.111713ms
Dec 24 11:17:58.213: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07623935s
Dec 24 11:18:00.262: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124988437s
Dec 24 11:18:02.290: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153590291s
Dec 24 11:18:05.613: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.476465955s
Dec 24 11:18:07.639: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.502160826s
Dec 24 11:18:09.657: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.520094067s
Dec 24 11:18:12.056: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.919129506s
Dec 24 11:18:14.081: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.943951616s
Dec 24 11:18:16.100: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.962723321s
Dec 24 11:18:18.119: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 21.982430216s
Dec 24 11:18:20.147: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 24.010025973s
Dec 24 11:18:22.157: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 26.020282283s
Dec 24 11:18:24.211: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 28.073709964s
Dec 24 11:18:26.233: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 30.096116376s
Dec 24 11:18:28.248: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 32.111010985s
Dec 24 11:18:30.279: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 34.141857734s
Dec 24 11:18:32.443: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Running", Reason="", readiness=false. Elapsed: 36.306542698s
Dec 24 11:18:34.455: INFO: Pod "pod-subpath-test-projected-nlz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.318090426s
STEP: Saw pod success
Dec 24 11:18:34.455: INFO: Pod "pod-subpath-test-projected-nlz2" satisfied condition "success or failure"
Dec 24 11:18:34.467: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-nlz2 container test-container-subpath-projected-nlz2: 
STEP: delete the pod
Dec 24 11:18:35.670: INFO: Waiting for pod pod-subpath-test-projected-nlz2 to disappear
Dec 24 11:18:35.689: INFO: Pod pod-subpath-test-projected-nlz2 no longer exists
STEP: Deleting pod pod-subpath-test-projected-nlz2
Dec 24 11:18:35.689: INFO: Deleting pod "pod-subpath-test-projected-nlz2" in namespace "e2e-tests-subpath-hcrwk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:18:35.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hcrwk" for this suite.
Dec 24 11:18:41.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:18:42.117: INFO: namespace: e2e-tests-subpath-hcrwk, resource: bindings, ignored listing per whitelist
Dec 24 11:18:42.211: INFO: namespace e2e-tests-subpath-hcrwk deletion completed in 6.492157738s

• [SLOW TEST:46.300 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:18:42.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 11:18:42.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:18:53.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6jhdz" for this suite.
Dec 24 11:19:37.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:19:37.267: INFO: namespace: e2e-tests-pods-6jhdz, resource: bindings, ignored listing per whitelist
Dec 24 11:19:37.347: INFO: namespace e2e-tests-pods-6jhdz deletion completed in 44.198126986s

• [SLOW TEST:55.135 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
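The websocket spec opens the pod's exec subresource over a WebSocket connection and streams a command's output back. Driving the same subresource from the command line is simply kubectl exec, which negotiates the streaming protocol (SPDY or, in newer clients, WebSocket) for you; pod name and command here are illustrative:

    kubectl run exec-demo --image=busybox --restart=Never -- sleep 3600
    kubectl wait --for=condition=Ready pod/exec-demo
    kubectl exec exec-demo -- cat /etc/resolv.conf
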
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:19:37.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 24 11:19:37.680: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s772g,SelfLink:/api/v1/namespaces/e2e-tests-watch-s772g/configmaps/e2e-watch-test-label-changed,UID:454dd7be-263f-11ea-a994-fa163e34d433,ResourceVersion:15893119,Generation:0,CreationTimestamp:2019-12-24 11:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 11:19:37.680: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s772g,SelfLink:/api/v1/namespaces/e2e-tests-watch-s772g/configmaps/e2e-watch-test-label-changed,UID:454dd7be-263f-11ea-a994-fa163e34d433,ResourceVersion:15893120,Generation:0,CreationTimestamp:2019-12-24 11:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 24 11:19:37.680: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s772g,SelfLink:/api/v1/namespaces/e2e-tests-watch-s772g/configmaps/e2e-watch-test-label-changed,UID:454dd7be-263f-11ea-a994-fa163e34d433,ResourceVersion:15893121,Generation:0,CreationTimestamp:2019-12-24 11:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 24 11:19:47.866: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s772g,SelfLink:/api/v1/namespaces/e2e-tests-watch-s772g/configmaps/e2e-watch-test-label-changed,UID:454dd7be-263f-11ea-a994-fa163e34d433,ResourceVersion:15893135,Generation:0,CreationTimestamp:2019-12-24 11:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 11:19:47.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s772g,SelfLink:/api/v1/namespaces/e2e-tests-watch-s772g/configmaps/e2e-watch-test-label-changed,UID:454dd7be-263f-11ea-a994-fa163e34d433,ResourceVersion:15893136,Generation:0,CreationTimestamp:2019-12-24 11:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 24 11:19:47.868: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s772g,SelfLink:/api/v1/namespaces/e2e-tests-watch-s772g/configmaps/e2e-watch-test-label-changed,UID:454dd7be-263f-11ea-a994-fa163e34d433,ResourceVersion:15893137,Generation:0,CreationTimestamp:2019-12-24 11:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:19:47.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-s772g" for this suite.
Dec 24 11:19:53.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:19:54.118: INFO: namespace: e2e-tests-watch-s772g, resource: bindings, ignored listing per whitelist
Dec 24 11:19:54.136: INFO: namespace e2e-tests-watch-s772g deletion completed in 6.239379684s

• [SLOW TEST:16.789 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
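The watch test registers a watch filtered by a label selector, then expects a DELETED event once the label is changed, because the object drops out of the selector's view even though it still exists. Roughly the same behaviour can be observed from the CLI (hypothetical names; kubectl's table output does not print the event type, but the object vanishes from the filtered watch):

# Watch only configmaps carrying a specific label.
kubectl create configmap watch-demo --from-literal=mutation=0
kubectl label configmap watch-demo watch-this-configmap=demo
kubectl get configmaps -l watch-this-configmap=demo --watch &
# Changing the label value ends the object's membership in the selector,
# which the watch reports as a deletion of the watched object.
kubectl label configmap watch-demo watch-this-configmap=other --overwrite
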
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:19:54.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 24 11:19:54.973: INFO: created pod pod-service-account-defaultsa
Dec 24 11:19:54.973: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 24 11:19:55.081: INFO: created pod pod-service-account-mountsa
Dec 24 11:19:55.081: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 24 11:19:55.111: INFO: created pod pod-service-account-nomountsa
Dec 24 11:19:55.111: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 24 11:19:55.146: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 24 11:19:55.146: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 24 11:19:55.243: INFO: created pod pod-service-account-mountsa-mountspec
Dec 24 11:19:55.243: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 24 11:19:55.259: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 24 11:19:55.259: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 24 11:19:55.445: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 24 11:19:55.445: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 24 11:19:55.510: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 24 11:19:55.511: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 24 11:19:55.684: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 24 11:19:55.684: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:19:55.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-jm7bv" for this suite.
Dec 24 11:20:23.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:20:23.948: INFO: namespace: e2e-tests-svcaccounts-jm7bv, resource: bindings, ignored listing per whitelist
Dec 24 11:20:23.962: INFO: namespace e2e-tests-svcaccounts-jm7bv deletion completed in 28.225312995s

• [SLOW TEST:29.826 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
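The ServiceAccounts test above checks that token automount can be declined either on the ServiceAccount or on the pod spec, with the pod-level field taking precedence (pod-service-account-nomountsa-mountspec still mounts; pod-service-account-defaultsa-nomountspec does not). A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                     # hypothetical name
automountServiceAccountToken: false    # opt out for every pod that uses this SA
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                   # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level setting overrides the SA default
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
EOF
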
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:20:23.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 24 11:20:24.401: INFO: Waiting up to 5m0s for pod "pod-611ced56-263f-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-frjqj" to be "success or failure"
Dec 24 11:20:24.475: INFO: Pod "pod-611ced56-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.496546ms
Dec 24 11:20:26.594: INFO: Pod "pod-611ced56-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192696549s
Dec 24 11:20:28.621: INFO: Pod "pod-611ced56-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21978491s
Dec 24 11:20:30.640: INFO: Pod "pod-611ced56-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238914059s
Dec 24 11:20:32.680: INFO: Pod "pod-611ced56-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.27947145s
Dec 24 11:20:34.700: INFO: Pod "pod-611ced56-263f-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.298763907s
STEP: Saw pod success
Dec 24 11:20:34.700: INFO: Pod "pod-611ced56-263f-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:20:34.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-611ced56-263f-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:20:34.913: INFO: Waiting for pod pod-611ced56-263f-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:20:34.929: INFO: Pod pod-611ced56-263f-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:20:34.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-frjqj" for this suite.
Dec 24 11:20:40.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:20:41.146: INFO: namespace: e2e-tests-emptydir-frjqj, resource: bindings, ignored listing per whitelist
Dec 24 11:20:41.199: INFO: namespace e2e-tests-emptydir-frjqj deletion completed in 6.259709272s

• [SLOW TEST:17.236 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
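"Volume on tmpfs" above is an emptyDir with medium: Memory. A rough stand-in for what the test container verifies, using hypothetical names (the suite uses its own mounttest image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  volumes:
  - name: cache
    emptyDir:
      medium: Memory          # back the volume with tmpfs instead of node disk
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep ' /cache ' && stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
EOF
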
SSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:20:41.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 24 11:20:51.617: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-6b639707-263f-11ea-b7c4-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-m6bfx", SelfLink:"/api/v1/namespaces/e2e-tests-pods-m6bfx/pods/pod-submit-remove-6b639707-263f-11ea-b7c4-0242ac110005", UID:"6b6603f4-263f-11ea-a994-fa163e34d433", ResourceVersion:"15893347", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712783241, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"470607937"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bdqr8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002428740), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bdqr8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001711b88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000bf34a0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001711bc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001711be0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001711be8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001711bec)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712783241, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712783250, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712783250, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712783241, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0021d48a0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0021d48c0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://d0b250ea48e1b26a15ef656adbdbe0fd1f4cf3f0689362e2b7b87c365de92397"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:21:02.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-m6bfx" for this suite.
Dec 24 11:21:08.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:21:08.872: INFO: namespace: e2e-tests-pods-m6bfx, resource: bindings, ignored listing per whitelist
Dec 24 11:21:08.986: INFO: namespace e2e-tests-pods-m6bfx deletion completed in 6.282906827s

• [SLOW TEST:27.788 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
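The submit-and-remove test above sets up a watch, creates the pod, deletes it gracefully, and expects the kubelet's termination notice followed by a deletion event. The same sequence from the CLI, with hypothetical names:

kubectl run submit-remove-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl get pod submit-remove-demo --watch &        # observe the create/update/delete transitions
kubectl delete pod submit-remove-demo --grace-period=30
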
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:21:08.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:21:22.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-85q6r" for this suite.
Dec 24 11:21:46.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:21:46.501: INFO: namespace: e2e-tests-replication-controller-85q6r, resource: bindings, ignored listing per whitelist
Dec 24 11:21:46.691: INFO: namespace e2e-tests-replication-controller-85q6r deletion completed in 24.405965295s

• [SLOW TEST:37.704 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
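Adoption here means a pre-existing bare pod whose labels match the controller's selector gains an ownerReference to the new ReplicationController instead of a second pod being created. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-demo
  labels:
    name: pod-adoption-demo        # the RC selector below matches this label
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: adopting-rc
spec:
  replicas: 1
  selector:
    name: pod-adoption-demo        # matches the orphan pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-demo
    spec:
      containers:
      - name: main
        image: docker.io/library/nginx:1.14-alpine
EOF
# The adopted pod now carries an ownerReference naming the RC:
kubectl get pod pod-adoption-demo -o jsonpath='{.metadata.ownerReferences[0].name}'
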
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:21:46.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:21:47.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-ccrcx" to be "success or failure"
Dec 24 11:21:47.087: INFO: Pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.669377ms
Dec 24 11:21:49.101: INFO: Pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038595088s
Dec 24 11:21:51.158: INFO: Pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09548949s
Dec 24 11:21:53.180: INFO: Pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11740238s
Dec 24 11:21:55.202: INFO: Pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139253256s
Dec 24 11:21:57.270: INFO: Pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.207071817s
STEP: Saw pod success
Dec 24 11:21:57.270: INFO: Pod "downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:21:57.283: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:21:57.403: INFO: Waiting for pod downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:21:57.434: INFO: Pod downwardapi-volume-9272ec6c-263f-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:21:57.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ccrcx" for this suite.
Dec 24 11:22:05.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:22:05.798: INFO: namespace: e2e-tests-downward-api-ccrcx, resource: bindings, ignored listing per whitelist
Dec 24 11:22:05.888: INFO: namespace e2e-tests-downward-api-ccrcx deletion completed in 8.368261588s

• [SLOW TEST:19.196 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
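The downward API volume test exposes the container's own CPU limit as a file and reads it back. A small sketch of the mechanism, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: "1m"            # report in millicores, so the file contains 500
EOF
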
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:22:05.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 24 11:22:06.213: INFO: Waiting up to 5m0s for pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-r8n4p" to be "success or failure"
Dec 24 11:22:06.236: INFO: Pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.608595ms
Dec 24 11:22:08.293: INFO: Pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079634256s
Dec 24 11:22:10.307: INFO: Pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094448101s
Dec 24 11:22:12.617: INFO: Pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40394003s
Dec 24 11:22:14.662: INFO: Pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449049875s
Dec 24 11:22:16.717: INFO: Pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504615832s
STEP: Saw pod success
Dec 24 11:22:16.718: INFO: Pod "pod-9dd690a2-263f-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:22:16.738: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9dd690a2-263f-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:22:16.788: INFO: Waiting for pod pod-9dd690a2-263f-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:22:16.871: INFO: Pod pod-9dd690a2-263f-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:22:16.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r8n4p" for this suite.
Dec 24 11:22:22.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:22:22.997: INFO: namespace: e2e-tests-emptydir-r8n4p, resource: bindings, ignored listing per whitelist
Dec 24 11:22:23.070: INFO: namespace e2e-tests-emptydir-r8n4p deletion completed in 6.190157159s

• [SLOW TEST:17.181 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
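"(root,0777,default)" refers to a file created as root with mode 0777 on an emptyDir using the default node-disk medium, in contrast to the tmpfs variant earlier. A rough hand-rolled check with hypothetical names (the suite's mounttest image performs the real verification):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}                   # default medium: backed by the node's filesystem
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /scratch/f && chmod 0777 /scratch/f && stat -c '%U %a' /scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
EOF
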
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:22:23.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-tw7nl
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 24 11:22:23.270: INFO: Found 0 stateful pods, waiting for 3
Dec 24 11:22:33.900: INFO: Found 2 stateful pods, waiting for 3
Dec 24 11:22:43.298: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:22:43.298: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:22:43.298: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 24 11:22:53.300: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:22:53.300: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:22:53.300: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 24 11:22:53.373: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 24 11:23:03.490: INFO: Updating stateful set ss2
Dec 24 11:23:03.521: INFO: Waiting for Pod e2e-tests-statefulset-tw7nl/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 24 11:23:16.516: INFO: Found 2 stateful pods, waiting for 3
Dec 24 11:23:26.662: INFO: Found 2 stateful pods, waiting for 3
Dec 24 11:23:36.775: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:23:36.775: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:23:36.775: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 24 11:23:46.613: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:23:46.613: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:23:46.613: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 24 11:23:46.657: INFO: Updating stateful set ss2
Dec 24 11:23:46.708: INFO: Waiting for Pod e2e-tests-statefulset-tw7nl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:23:56.769: INFO: Updating stateful set ss2
Dec 24 11:23:56.784: INFO: Waiting for StatefulSet e2e-tests-statefulset-tw7nl/ss2 to complete update
Dec 24 11:23:56.784: INFO: Waiting for Pod e2e-tests-statefulset-tw7nl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:24:06.871: INFO: Waiting for StatefulSet e2e-tests-statefulset-tw7nl/ss2 to complete update
Dec 24 11:24:06.871: INFO: Waiting for Pod e2e-tests-statefulset-tw7nl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:24:17.133: INFO: Waiting for StatefulSet e2e-tests-statefulset-tw7nl/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 24 11:24:26.811: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tw7nl
Dec 24 11:24:26.817: INFO: Scaling statefulset ss2 to 0
Dec 24 11:24:46.939: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 11:24:46.949: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:24:47.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-tw7nl" for this suite.
Dec 24 11:24:55.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:24:55.256: INFO: namespace: e2e-tests-statefulset-tw7nl, resource: bindings, ignored listing per whitelist
Dec 24 11:24:55.437: INFO: namespace e2e-tests-statefulset-tw7nl deletion completed in 8.426593658s

• [SLOW TEST:152.367 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
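The canary and phased roll-outs above hinge on spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal greater than or equal to the partition move to the update revision. A hedged CLI sketch against a StatefulSet named ss2 with 3 replicas, assuming its template's container is named nginx (the suite drives this through the API rather than kubectl):

# Stage the new image while holding every pod back (partition above the replica count).
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine

# Canary: let only the highest ordinal (ss2-2) take the new revision.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# Phased roll-out: keep lowering the partition until every pod is updated.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
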
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:24:55.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 11:24:55.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-72v4j'
Dec 24 11:24:57.869: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 11:24:57.869: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 24 11:25:01.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-72v4j'
Dec 24 11:25:02.178: INFO: stderr: ""
Dec 24 11:25:02.179: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:25:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-72v4j" for this suite.
Dec 24 11:25:08.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:25:08.793: INFO: namespace: e2e-tests-kubectl-72v4j, resource: bindings, ignored listing per whitelist
Dec 24 11:25:08.948: INFO: namespace e2e-tests-kubectl-72v4j deletion completed in 6.755244684s

• [SLOW TEST:13.509 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
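The stderr above flags --generator=deployment/v1beta1 as deprecated. The test's command and its non-deprecated replacements (nginx-pod below is a hypothetical name):

# Deprecated form exercised by the test:
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1

# Preferred replacements: an explicit Deployment, or a bare pod via run-pod/v1.
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl run nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
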
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:25:08.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 11:25:09.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-59ck8'
Dec 24 11:25:09.238: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 11:25:09.239: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 24 11:25:09.281: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 24 11:25:09.314: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 24 11:25:09.338: INFO: scanned /root for discovery docs: 
Dec 24 11:25:09.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-59ck8'
Dec 24 11:25:34.692: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 24 11:25:34.692: INFO: stdout: "Created e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8\nScaling up e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 24 11:25:34.692: INFO: stdout: "Created e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8\nScaling up e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 24 11:25:34.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-59ck8'
Dec 24 11:25:34.818: INFO: stderr: ""
Dec 24 11:25:34.818: INFO: stdout: "e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8-6c7nl "
Dec 24 11:25:34.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8-6c7nl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-59ck8'
Dec 24 11:25:34.936: INFO: stderr: ""
Dec 24 11:25:34.936: INFO: stdout: "true"
Dec 24 11:25:34.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8-6c7nl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-59ck8'
Dec 24 11:25:35.094: INFO: stderr: ""
Dec 24 11:25:35.094: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 24 11:25:35.094: INFO: e2e-test-nginx-rc-d8eadfb79476cea7d033897d0a4456e8-6c7nl is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 24 11:25:35.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-59ck8'
Dec 24 11:25:35.216: INFO: stderr: ""
Dec 24 11:25:35.216: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:25:35.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-59ck8" for this suite.
Dec 24 11:25:59.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:25:59.472: INFO: namespace: e2e-tests-kubectl-59ck8, resource: bindings, ignored listing per whitelist
Dec 24 11:25:59.520: INFO: namespace e2e-tests-kubectl-59ck8 deletion completed in 24.291483898s

• [SLOW TEST:50.572 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
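kubectl rolling-update only operates on ReplicationControllers and is deprecated, as the stderr above notes. With a Deployment, the same "update the image and wait for convergence" is expressed through the rollout machinery; a sketch with hypothetical names:

# Deprecated RC path exercised by the test:
kubectl rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent

# Deployment equivalent: change the image and track the rollout.
kubectl create deployment nginx-demo --image=docker.io/library/nginx:1.14-alpine
kubectl set image deployment/nginx-demo nginx=docker.io/library/nginx:1.15-alpine
kubectl rollout status deployment/nginx-demo
kubectl rollout undo deployment/nginx-demo       # roll back if needed
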
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:25:59.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 24 11:25:59.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-p4skh run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 24 11:26:10.318: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 24 11:26:10.319: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:26:12.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p4skh" for this suite.
Dec 24 11:26:19.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:26:19.220: INFO: namespace: e2e-tests-kubectl-p4skh, resource: bindings, ignored listing per whitelist
Dec 24 11:26:19.427: INFO: namespace e2e-tests-kubectl-p4skh deletion completed in 7.086719629s

• [SLOW TEST:19.907 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
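The --rm job test feeds stdin into a one-shot workload and expects both the echoed input and automatic cleanup. Below is roughly the test's command with its stdin piped in explicitly, plus a generator-free equivalent built on a throwaway attached pod (rm-demo is a hypothetical name):

# Deprecated job-generator form from the test:
echo abcd1234 | kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm --generator=job/v1 --restart=OnFailure --attach --stdin -- sh -c 'cat && echo stdin closed'

# Generator-free equivalent:
echo abcd1234 | kubectl run rm-demo --image=docker.io/library/busybox:1.29 --rm --restart=Never --attach --stdin -- sh -c 'cat && echo stdin closed'
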
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:26:19.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-tqmjk
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 24 11:26:19.792: INFO: Found 0 stateful pods, waiting for 3
Dec 24 11:26:29.844: INFO: Found 1 stateful pods, waiting for 3
Dec 24 11:26:39.834: INFO: Found 2 stateful pods, waiting for 3
Dec 24 11:26:49.812: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:26:49.812: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:26:49.812: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 24 11:26:59.848: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:26:59.849: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:26:59.849: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 11:26:59.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tqmjk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 11:27:00.612: INFO: stderr: ""
Dec 24 11:27:00.612: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 11:27:00.612: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 24 11:27:10.715: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 24 11:27:21.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tqmjk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:27:21.983: INFO: stderr: ""
Dec 24 11:27:21.983: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 11:27:21.983: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 11:27:32.097: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:27:32.097: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:27:32.097: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:27:32.097: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:27:42.234: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:27:42.234: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:27:42.234: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:27:52.192: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:27:52.192: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:27:52.192: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:28:02.128: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:28:02.128: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 11:28:12.139: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 24 11:28:22.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tqmjk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 11:28:23.027: INFO: stderr: ""
Dec 24 11:28:23.027: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 11:28:23.028: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 11:28:33.113: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 24 11:28:43.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tqmjk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 11:28:44.435: INFO: stderr: ""
Dec 24 11:28:44.435: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 11:28:44.435: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 11:28:44.555: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:28:44.555: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:28:44.555: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:28:44.556: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:28:54.603: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:28:54.603: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:28:54.603: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:29:04.644: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:29:04.644: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:29:04.644: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:29:14.596: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:29:14.596: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:29:24.983: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
Dec 24 11:29:24.984: INFO: Waiting for Pod e2e-tests-statefulset-tqmjk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 11:29:34.625: INFO: Waiting for StatefulSet e2e-tests-statefulset-tqmjk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 24 11:29:44.593: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tqmjk
Dec 24 11:29:44.723: INFO: Scaling statefulset ss2 to 0
Dec 24 11:30:14.899: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 11:30:14.914: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:30:15.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-tqmjk" for this suite.
Dec 24 11:30:23.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:30:23.166: INFO: namespace: e2e-tests-statefulset-tqmjk, resource: bindings, ignored listing per whitelist
Dec 24 11:30:23.228: INFO: namespace e2e-tests-statefulset-tqmjk deletion completed in 8.209031399s

• [SLOW TEST:243.800 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
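Editor's note: the spec above pushes a template change through a RollingUpdate and then reverts it, which is why the pods are reported against two ControllerRevisions (ss2-6c5cd755cd and ss2-7c9b54fd4c). Outside the test harness the same flow can be reproduced with plain kubectl. A minimal sketch, assuming a StatefulSet named ss2 whose container is called nginx (the container name and replacement image are illustrative, not taken from the suite's manifests), and assuming a kubectl new enough that rollout undo supports StatefulSets:

# Trigger a rolling update by changing the pod template image
kubectl set image statefulset/ss2 nginx=nginx:1.15-alpine

# The controller replaces pods in reverse ordinal order; wait for convergence
kubectl rollout status statefulset/ss2

# Template revisions are stored as ControllerRevisions (the ss2-<hash> names above)
kubectl get controllerrevisions

# Roll back to the previous template revision
kubectl rollout undo statefulset/ss2
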
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:30:23.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 24 11:30:45.855: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:45.883: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:30:47.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:47.907: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:30:49.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:49.912: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:30:51.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:51.922: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:30:53.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:53.930: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:30:55.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:55.982: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:30:57.890: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:57.950: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:30:59.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:30:59.918: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:31:01.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:31:01.909: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 11:31:03.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 11:31:03.912: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:31:03.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wc2br" for this suite.
Dec 24 11:31:28.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:31:28.188: INFO: namespace: e2e-tests-container-lifecycle-hook-wc2br, resource: bindings, ignored listing per whitelist
Dec 24 11:31:28.210: INFO: namespace e2e-tests-container-lifecycle-hook-wc2br deletion completed in 24.214082239s

• [SLOW TEST:64.981 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
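Editor's note: the pod in this spec registers an HTTP preStop handler, and the "still exists / no longer exists" polling above is just the test waiting out graceful termination before checking that the handler pod received the request. A minimal manifest sketch; the receiver address, port, and path are placeholders standing in for the "container to handle the HTTPGet hook request" that the suite creates in BeforeEach:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          host: 10.32.0.5      # hook receiver address; assumption for illustration
          port: 8080
          path: /echo?msg=prestop
EOF

# Deleting the pod fires the preStop HTTP GET before the container is stopped
kubectl delete pod pod-with-prestop-http-hook
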
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:31:28.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vnsjz
Dec 24 11:31:38.589: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vnsjz
STEP: checking the pod's current state and verifying that restartCount is present
Dec 24 11:31:38.657: INFO: Initial restart count of pod liveness-http is 0
Dec 24 11:31:59.124: INFO: Restart count of pod e2e-tests-container-probe-vnsjz/liveness-http is now 1 (20.466543616s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:31:59.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vnsjz" for this suite.
Dec 24 11:32:07.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:32:07.373: INFO: namespace: e2e-tests-container-probe-vnsjz, resource: bindings, ignored listing per whitelist
Dec 24 11:32:07.510: INFO: namespace e2e-tests-container-probe-vnsjz deletion completed in 8.32303474s

• [SLOW TEST:39.299 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
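Editor's note: the restart recorded at 11:31:59 is the kubelet killing and restarting the container once its HTTP liveness probe against /healthz starts failing. A rough equivalent pod, assuming an image that serves /healthz for a while and then returns errors (the image name, port, and probe timings here are placeholders, not the ones used by the suite):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: registry.example.com/flaky-healthz:latest   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 3
      failureThreshold: 1
EOF

# The restart count climbs each time the probe fails and the kubelet restarts the container
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'
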
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:32:07.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 11:32:33.805: INFO: Container started at 2019-12-24 11:32:16 +0000 UTC, pod became ready at 2019-12-24 11:32:32 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:32:33.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9jm79" for this suite.
Dec 24 11:32:59.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:33:00.091: INFO: namespace: e2e-tests-container-probe-9jm79, resource: bindings, ignored listing per whitelist
Dec 24 11:33:00.100: INFO: namespace e2e-tests-container-probe-9jm79 deletion completed in 26.287738716s

• [SLOW TEST:52.590 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
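Editor's note: here the container starts at 11:32:16 but the pod only turns Ready at 11:32:32, because the readiness probe has an initial delay; unlike a liveness probe it never restarts the container, it only gates the Ready condition. A small sketch with illustrative values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay
spec:
  containers:
  - name: app
    image: nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15   # pod stays NotReady at least this long after start
EOF

# The Ready condition flips to True only after the first successful probe
kubectl get pod readiness-delay -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
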
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:33:00.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 24 11:33:00.564: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5mtgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-5mtgh/configmaps/e2e-watch-test-resource-version,UID:23ca97a8-2641-11ea-a994-fa163e34d433,ResourceVersion:15895176,Generation:0,CreationTimestamp:2019-12-24 11:33:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 11:33:00.564: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5mtgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-5mtgh/configmaps/e2e-watch-test-resource-version,UID:23ca97a8-2641-11ea-a994-fa163e34d433,ResourceVersion:15895177,Generation:0,CreationTimestamp:2019-12-24 11:33:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:33:00.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-5mtgh" for this suite.
Dec 24 11:33:06.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:33:06.969: INFO: namespace: e2e-tests-watch-5mtgh, resource: bindings, ignored listing per whitelist
Dec 24 11:33:07.015: INFO: namespace e2e-tests-watch-5mtgh deletion completed in 6.43652573s

• [SLOW TEST:6.914 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
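Editor's note: the watch in this spec is opened with the resourceVersion returned by the first update, so only the later MODIFIED (mutation: 2) and DELETED events are delivered. Something similar can be observed against the raw API with kubectl; a sketch, assuming a configmap in the default namespace (name and namespace are placeholders) and assuming kubectl get --raw streams the chunked watch response until interrupted:

# Capture the resourceVersion after an earlier write
RV=$(kubectl get configmap e2e-watch-test-resource-version -n default -o jsonpath='{.metadata.resourceVersion}')

# Open a watch starting from that version; only changes made after it stream back as events
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
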
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:33:07.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 11:33:07.362: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 24 11:33:07.439: INFO: Number of nodes with available pods: 0
Dec 24 11:33:07.439: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:08.898: INFO: Number of nodes with available pods: 0
Dec 24 11:33:08.898: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:09.463: INFO: Number of nodes with available pods: 0
Dec 24 11:33:09.463: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:10.768: INFO: Number of nodes with available pods: 0
Dec 24 11:33:10.768: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:11.458: INFO: Number of nodes with available pods: 0
Dec 24 11:33:11.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:12.491: INFO: Number of nodes with available pods: 0
Dec 24 11:33:12.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:13.473: INFO: Number of nodes with available pods: 0
Dec 24 11:33:13.473: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:14.732: INFO: Number of nodes with available pods: 0
Dec 24 11:33:14.732: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:15.474: INFO: Number of nodes with available pods: 0
Dec 24 11:33:15.474: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:16.480: INFO: Number of nodes with available pods: 0
Dec 24 11:33:16.480: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:17.465: INFO: Number of nodes with available pods: 0
Dec 24 11:33:17.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:18.477: INFO: Number of nodes with available pods: 1
Dec 24 11:33:18.477: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 24 11:33:18.632: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:19.734: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:20.738: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:21.735: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:22.837: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:23.746: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:24.731: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:24.732: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:25.734: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:25.734: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:26.733: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:26.733: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:27.731: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:27.731: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:28.786: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:28.787: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:29.737: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:29.737: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:30.737: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:30.737: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:31.739: INFO: Wrong image for pod: daemon-set-q8w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 11:33:31.739: INFO: Pod daemon-set-q8w6j is not available
Dec 24 11:33:32.836: INFO: Pod daemon-set-9jhvz is not available
Dec 24 11:33:33.741: INFO: Pod daemon-set-9jhvz is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 24 11:33:33.769: INFO: Number of nodes with available pods: 0
Dec 24 11:33:33.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:34.852: INFO: Number of nodes with available pods: 0
Dec 24 11:33:34.852: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:35.838: INFO: Number of nodes with available pods: 0
Dec 24 11:33:35.838: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:36.803: INFO: Number of nodes with available pods: 0
Dec 24 11:33:36.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:37.803: INFO: Number of nodes with available pods: 0
Dec 24 11:33:37.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:39.058: INFO: Number of nodes with available pods: 0
Dec 24 11:33:39.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:41.256: INFO: Number of nodes with available pods: 0
Dec 24 11:33:41.256: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:42.032: INFO: Number of nodes with available pods: 0
Dec 24 11:33:42.032: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:42.905: INFO: Number of nodes with available pods: 0
Dec 24 11:33:42.905: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:43.792: INFO: Number of nodes with available pods: 0
Dec 24 11:33:43.792: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 11:33:44.828: INFO: Number of nodes with available pods: 1
Dec 24 11:33:44.828: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6pjmx, will wait for the garbage collector to delete the pods
Dec 24 11:33:44.952: INFO: Deleting DaemonSet.extensions daemon-set took: 19.993983ms
Dec 24 11:33:45.053: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.556235ms
Dec 24 11:34:02.670: INFO: Number of nodes with available pods: 0
Dec 24 11:34:02.670: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 11:34:02.676: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6pjmx/daemonsets","resourceVersion":"15895305"},"items":null}

Dec 24 11:34:02.679: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6pjmx/pods","resourceVersion":"15895305"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:34:02.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6pjmx" for this suite.
Dec 24 11:34:10.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:34:10.994: INFO: namespace: e2e-tests-daemonsets-6pjmx, resource: bindings, ignored listing per whitelist
Dec 24 11:34:11.021: INFO: namespace e2e-tests-daemonsets-6pjmx deletion completed in 8.321181397s

• [SLOW TEST:64.006 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
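Editor's note: this spec flips the DaemonSet's pod template from nginx:1.14-alpine to the redis test image and waits for the single node's pod to be replaced; with the RollingUpdate strategy (the default for apps/v1 DaemonSets) the controller does that on any template change. A minimal sketch, assuming the DaemonSet is named daemon-set and its container is called app (the container name is an assumption):

# Change the pod template image; RollingUpdate replaces pods node by node
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0

# Wait for the rollout to converge
kubectl rollout status daemonset/daemon-set

# Every daemon pod in the namespace should now report the new image
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
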
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:34:11.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 24 11:34:11.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:11.687: INFO: stderr: ""
Dec 24 11:34:11.688: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 11:34:11.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:12.360: INFO: stderr: ""
Dec 24 11:34:12.360: INFO: stdout: "update-demo-nautilus-hx54k update-demo-nautilus-sb8ds "
Dec 24 11:34:12.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:12.627: INFO: stderr: ""
Dec 24 11:34:12.627: INFO: stdout: ""
Dec 24 11:34:12.627: INFO: update-demo-nautilus-hx54k is created but not running
Dec 24 11:34:17.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:17.835: INFO: stderr: ""
Dec 24 11:34:17.835: INFO: stdout: "update-demo-nautilus-hx54k update-demo-nautilus-sb8ds "
Dec 24 11:34:17.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:18.132: INFO: stderr: ""
Dec 24 11:34:18.132: INFO: stdout: ""
Dec 24 11:34:18.132: INFO: update-demo-nautilus-hx54k is created but not running
Dec 24 11:34:23.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:23.342: INFO: stderr: ""
Dec 24 11:34:23.342: INFO: stdout: "update-demo-nautilus-hx54k update-demo-nautilus-sb8ds "
Dec 24 11:34:23.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:23.512: INFO: stderr: ""
Dec 24 11:34:23.512: INFO: stdout: ""
Dec 24 11:34:23.512: INFO: update-demo-nautilus-hx54k is created but not running
Dec 24 11:34:28.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:28.702: INFO: stderr: ""
Dec 24 11:34:28.702: INFO: stdout: "update-demo-nautilus-hx54k update-demo-nautilus-sb8ds "
Dec 24 11:34:28.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:28.883: INFO: stderr: ""
Dec 24 11:34:28.883: INFO: stdout: "true"
Dec 24 11:34:28.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:29.027: INFO: stderr: ""
Dec 24 11:34:29.027: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 11:34:29.027: INFO: validating pod update-demo-nautilus-hx54k
Dec 24 11:34:29.146: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 11:34:29.147: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 11:34:29.147: INFO: update-demo-nautilus-hx54k is verified up and running
Dec 24 11:34:29.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb8ds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:29.289: INFO: stderr: ""
Dec 24 11:34:29.289: INFO: stdout: "true"
Dec 24 11:34:29.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb8ds -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:29.393: INFO: stderr: ""
Dec 24 11:34:29.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 11:34:29.393: INFO: validating pod update-demo-nautilus-sb8ds
Dec 24 11:34:29.410: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 11:34:29.410: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 11:34:29.410: INFO: update-demo-nautilus-sb8ds is verified up and running
STEP: scaling down the replication controller
Dec 24 11:34:29.413: INFO: scanned /root for discovery docs: 
Dec 24 11:34:29.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:30.824: INFO: stderr: ""
Dec 24 11:34:30.824: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 11:34:30.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:31.051: INFO: stderr: ""
Dec 24 11:34:31.051: INFO: stdout: "update-demo-nautilus-hx54k update-demo-nautilus-sb8ds "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 24 11:34:36.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:36.240: INFO: stderr: ""
Dec 24 11:34:36.240: INFO: stdout: "update-demo-nautilus-hx54k update-demo-nautilus-sb8ds "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 24 11:34:41.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:41.466: INFO: stderr: ""
Dec 24 11:34:41.466: INFO: stdout: "update-demo-nautilus-hx54k update-demo-nautilus-sb8ds "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 24 11:34:46.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:46.657: INFO: stderr: ""
Dec 24 11:34:46.657: INFO: stdout: "update-demo-nautilus-hx54k "
Dec 24 11:34:46.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:46.774: INFO: stderr: ""
Dec 24 11:34:46.774: INFO: stdout: "true"
Dec 24 11:34:46.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:46.930: INFO: stderr: ""
Dec 24 11:34:46.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 11:34:46.930: INFO: validating pod update-demo-nautilus-hx54k
Dec 24 11:34:46.941: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 11:34:46.941: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 11:34:46.941: INFO: update-demo-nautilus-hx54k is verified up and running
STEP: scaling up the replication controller
Dec 24 11:34:46.944: INFO: scanned /root for discovery docs: 
Dec 24 11:34:46.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:48.222: INFO: stderr: ""
Dec 24 11:34:48.222: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 11:34:48.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:48.382: INFO: stderr: ""
Dec 24 11:34:48.382: INFO: stdout: "update-demo-nautilus-8k7jn update-demo-nautilus-hx54k "
Dec 24 11:34:48.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8k7jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:48.666: INFO: stderr: ""
Dec 24 11:34:48.666: INFO: stdout: ""
Dec 24 11:34:48.666: INFO: update-demo-nautilus-8k7jn is created but not running
Dec 24 11:34:53.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:53.875: INFO: stderr: ""
Dec 24 11:34:53.876: INFO: stdout: "update-demo-nautilus-8k7jn update-demo-nautilus-hx54k "
Dec 24 11:34:53.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8k7jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:34:54.097: INFO: stderr: ""
Dec 24 11:34:54.097: INFO: stdout: ""
Dec 24 11:34:54.097: INFO: update-demo-nautilus-8k7jn is created but not running
Dec 24 11:34:59.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:35:01.343: INFO: stderr: ""
Dec 24 11:35:01.344: INFO: stdout: "update-demo-nautilus-8k7jn update-demo-nautilus-hx54k "
Dec 24 11:35:01.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8k7jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:35:01.500: INFO: stderr: ""
Dec 24 11:35:01.500: INFO: stdout: "true"
Dec 24 11:35:01.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8k7jn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:35:01.596: INFO: stderr: ""
Dec 24 11:35:01.596: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 11:35:01.596: INFO: validating pod update-demo-nautilus-8k7jn
Dec 24 11:35:01.609: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 11:35:01.610: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 11:35:01.610: INFO: update-demo-nautilus-8k7jn is verified up and running
Dec 24 11:35:01.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:35:01.751: INFO: stderr: ""
Dec 24 11:35:01.751: INFO: stdout: "true"
Dec 24 11:35:01.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hx54k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:35:01.903: INFO: stderr: ""
Dec 24 11:35:01.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 11:35:01.903: INFO: validating pod update-demo-nautilus-hx54k
Dec 24 11:35:01.933: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 11:35:01.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 11:35:01.933: INFO: update-demo-nautilus-hx54k is verified up and running
STEP: using delete to clean up resources
Dec 24 11:35:01.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:35:02.311: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 11:35:02.312: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 24 11:35:02.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-sclxb'
Dec 24 11:35:02.582: INFO: stderr: "No resources found.\n"
Dec 24 11:35:02.582: INFO: stdout: ""
Dec 24 11:35:02.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-sclxb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 11:35:02.868: INFO: stderr: ""
Dec 24 11:35:02.868: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:35:02.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sclxb" for this suite.
Dec 24 11:35:25.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:35:25.215: INFO: namespace: e2e-tests-kubectl-sclxb, resource: bindings, ignored listing per whitelist
Dec 24 11:35:25.246: INFO: namespace e2e-tests-kubectl-sclxb deletion completed in 22.352227102s

• [SLOW TEST:74.225 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
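Editor's note: the scale-down and scale-up above are plain kubectl scale calls, with the suite then polling the pod list for the name=update-demo label until it matches the requested replica count. Reproduced by hand against the same replication controller (namespace omitted; it is whatever namespace the RC lives in):

# Scale the replication controller down to one replica, then back up to two
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo

kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
kubectl get rc update-demo-nautilus -o jsonpath='{.status.readyReplicas}'
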
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:35:25.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-5vgq9
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 11:35:25.524: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 11:35:57.876: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-5vgq9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 11:35:57.876: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 11:35:58.480: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:35:58.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-5vgq9" for this suite.
Dec 24 11:36:24.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:36:24.797: INFO: namespace: e2e-tests-pod-network-test-5vgq9, resource: bindings, ignored listing per whitelist
Dec 24 11:36:24.805: INFO: namespace e2e-tests-pod-network-test-5vgq9 deletion completed in 26.297826345s

• [SLOW TEST:59.558 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
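Editor's note: the "node-pod communication" check above execs into a host-network hostexec pod and curls the target pod's /hostName endpoint by pod IP. Done by hand it looks roughly like this; the pod names mirror the log, but the namespace is long gone and the target IP has to be read from the pod's status first:

# Find the pod IP of the netserver pod under test
TARGET=$(kubectl get pod netserver-0 -n e2e-tests-pod-network-test-5vgq9 -o jsonpath='{.status.podIP}')

# Curl it from the host-network test pod, as the suite does
kubectl exec -n e2e-tests-pod-network-test-5vgq9 host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://${TARGET}:8080/hostName"
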
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:36:24.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:36:25.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-t2xbq" to be "success or failure"
Dec 24 11:36:25.190: INFO: Pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.539286ms
Dec 24 11:36:27.418: INFO: Pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237475874s
Dec 24 11:36:29.443: INFO: Pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262570964s
Dec 24 11:36:31.462: INFO: Pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28146189s
Dec 24 11:36:33.516: INFO: Pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335405189s
Dec 24 11:36:35.617: INFO: Pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.437198318s
STEP: Saw pod success
Dec 24 11:36:35.618: INFO: Pod "downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:36:35.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:36:36.095: INFO: Waiting for pod downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:36:36.106: INFO: Pod downwardapi-volume-9dde6c12-2641-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:36:36.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t2xbq" for this suite.
Dec 24 11:36:42.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:36:42.255: INFO: namespace: e2e-tests-downward-api-t2xbq, resource: bindings, ignored listing per whitelist
Dec 24 11:36:42.392: INFO: namespace e2e-tests-downward-api-t2xbq deletion completed in 6.276301406s

• [SLOW TEST:17.587 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
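Editor's note: the test pod mounts a downwardAPI volume that exposes the container's own CPU request as a file, prints it, and exits; "Saw pod success" means the printed value matched the request. A minimal sketch of such a pod (image, mount path, and request value are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m          # value is written in millicores, i.e. "250"
EOF

# Once the pod has Succeeded, the request value shows up in its logs
kubectl logs downwardapi-cpu-request
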
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:36:42.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 24 11:36:42.740: INFO: Waiting up to 5m0s for pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005" in namespace "e2e-tests-containers-mrrjh" to be "success or failure"
Dec 24 11:36:42.763: INFO: Pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.372448ms
Dec 24 11:36:44.780: INFO: Pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039580856s
Dec 24 11:36:46.798: INFO: Pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05751782s
Dec 24 11:36:49.291: INFO: Pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550354814s
Dec 24 11:36:51.303: INFO: Pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56280212s
Dec 24 11:36:53.320: INFO: Pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.579543783s
STEP: Saw pod success
Dec 24 11:36:53.320: INFO: Pod "client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:36:53.326: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:36:54.007: INFO: Waiting for pod client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:36:54.014: INFO: Pod client-containers-a8566d4a-2641-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:36:54.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mrrjh" for this suite.
Dec 24 11:37:00.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:37:00.305: INFO: namespace: e2e-tests-containers-mrrjh, resource: bindings, ignored listing per whitelist
Dec 24 11:37:00.504: INFO: namespace e2e-tests-containers-mrrjh deletion completed in 6.423776067s

• [SLOW TEST:18.111 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
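Editor's note: with command and args left blank, the container runs the image's own ENTRYPOINT/CMD, which is what this spec verifies. A tiny sketch (the image is illustrative, not the entrypoint-testing image the suite uses):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults
spec:
  containers:
  - name: test-container
    image: nginx:1.14-alpine    # no command/args: nginx's own entrypoint runs
EOF

# spec.containers[0].command and .args stay unset, so the image defaults apply
kubectl get pod image-defaults -o jsonpath='{.spec.containers[0].command}'
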
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:37:00.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 24 11:37:00.775: INFO: Waiting up to 5m0s for pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-9w8p5" to be "success or failure"
Dec 24 11:37:00.834: INFO: Pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 59.241704ms
Dec 24 11:37:02.854: INFO: Pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078893027s
Dec 24 11:37:04.866: INFO: Pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091649119s
Dec 24 11:37:06.888: INFO: Pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113532878s
Dec 24 11:37:08.902: INFO: Pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127246272s
Dec 24 11:37:11.466: INFO: Pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.690820071s
STEP: Saw pod success
Dec 24 11:37:11.466: INFO: Pod "downward-api-b3151298-2641-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:37:11.486: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b3151298-2641-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 11:37:11.843: INFO: Waiting for pod downward-api-b3151298-2641-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:37:11.866: INFO: Pod downward-api-b3151298-2641-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:37:11.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9w8p5" for this suite.
Dec 24 11:37:17.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:37:18.158: INFO: namespace: e2e-tests-downward-api-9w8p5, resource: bindings, ignored listing per whitelist
Dec 24 11:37:18.219: INFO: namespace e2e-tests-downward-api-9w8p5 deletion completed in 6.340625751s

• [SLOW TEST:17.715 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
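Editor's note: here the node's IP is injected into the container through the downward API as an environment variable; the test container prints its environment and exits. A minimal equivalent, with an illustrative image and command:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-host-ip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF

# Prints the IP of the node the pod landed on
kubectl logs downward-api-host-ip
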
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:37:18.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 11:37:18.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fjmj7'
Dec 24 11:37:18.578: INFO: stderr: ""
Dec 24 11:37:18.578: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 24 11:37:28.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fjmj7 -o json'
Dec 24 11:37:28.797: INFO: stderr: ""
Dec 24 11:37:28.797: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-24T11:37:18Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-fjmj7\",\n        \"resourceVersion\": \"15895774\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-fjmj7/pods/e2e-test-nginx-pod\",\n        \"uid\": \"bdad0efd-2641-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-7wft2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-7wft2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-7wft2\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T11:37:18Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T11:37:27Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T11:37:27Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T11:37:18Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://804e15fd26978f35c0839379bf033ebbc21eed0d49ead061d5eff78eb1acdf43\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-24T11:37:25Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-24T11:37:18Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 24 11:37:28.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-fjmj7'
Dec 24 11:37:29.395: INFO: stderr: ""
Dec 24 11:37:29.395: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 24 11:37:29.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fjmj7'
Dec 24 11:37:37.852: INFO: stderr: ""
Dec 24 11:37:37.852: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:37:37.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fjmj7" for this suite.
Dec 24 11:37:46.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:37:46.172: INFO: namespace: e2e-tests-kubectl-fjmj7, resource: bindings, ignored listing per whitelist
Dec 24 11:37:46.237: INFO: namespace e2e-tests-kubectl-fjmj7 deletion completed in 8.3421486s

• [SLOW TEST:28.018 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
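
The replace flow above can be reproduced by hand with plain kubectl. A minimal sketch, assuming the pod name and images from the log; the sed-based image swap is only illustrative, not the e2e framework's own mechanism:

  # Create the pod, then swap its image via `kubectl replace -f -`
  kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
  kubectl get pod e2e-test-nginx-pod -o json \
    | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
    | kubectl replace -f -
  # Confirm the container now runs the replacement image, then clean up
  kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'
  kubectl delete pod e2e-test-nginx-pod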
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:37:46.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-ce4856f3-2641-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:37:46.454: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-rhmqw" to be "success or failure"
Dec 24 11:37:46.465: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.77889ms
Dec 24 11:37:48.830: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376565855s
Dec 24 11:37:50.918: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464531012s
Dec 24 11:37:52.930: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476638029s
Dec 24 11:37:54.949: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495004051s
Dec 24 11:37:56.959: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.504980636s
Dec 24 11:37:59.252: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.797767726s
STEP: Saw pod success
Dec 24 11:37:59.252: INFO: Pod "pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:37:59.261: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 11:37:59.581: INFO: Waiting for pod pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:37:59.596: INFO: Pod pod-projected-secrets-ce4ddec3-2641-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:37:59.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rhmqw" for this suite.
Dec 24 11:38:05.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:38:05.979: INFO: namespace: e2e-tests-projected-rhmqw, resource: bindings, ignored listing per whitelist
Dec 24 11:38:06.079: INFO: namespace e2e-tests-projected-rhmqw deletion completed in 6.395089897s

• [SLOW TEST:19.841 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
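
The test mounts one projected secret into the same pod at two paths. A minimal sketch of an equivalent manifest; the secret key, mount paths, and busybox image are illustrative assumptions, not taken from the e2e source:

  apiVersion: v1
  kind: Secret
  metadata:
    name: projected-secret-test
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
      volumeMounts:
      - name: secret-volume-1
        mountPath: /etc/projected-secret-volume-1
        readOnly: true
      - name: secret-volume-2
        mountPath: /etc/projected-secret-volume-2
        readOnly: true
    volumes:
    - name: secret-volume-1
      projected:
        sources:
        - secret:
            name: projected-secret-test
    - name: secret-volume-2
      projected:
        sources:
        - secret:
            name: projected-secret-test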
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:38:06.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-96lln
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-96lln
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-96lln
STEP: Waiting until pod test-pod is running in namespace e2e-tests-statefulset-96lln
STEP: Waiting until stateful pod ss-0 is created and then deleted at least once in namespace e2e-tests-statefulset-96lln
Dec 24 11:38:20.814: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-96lln, name: ss-0, uid: df62703a-2641-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 24 11:38:22.516: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-96lln, name: ss-0, uid: df62703a-2641-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 24 11:38:22.813: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-96lln, name: ss-0, uid: df62703a-2641-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 24 11:38:22.845: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-96lln
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-96lln
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-96lln and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 24 11:38:35.997: INFO: Deleting all statefulset in ns e2e-tests-statefulset-96lln
Dec 24 11:38:36.009: INFO: Scaling statefulset ss to 0
Dec 24 11:38:56.064: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 11:38:56.072: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:38:56.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-96lln" for this suite.
Dec 24 11:39:04.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:39:04.404: INFO: namespace: e2e-tests-statefulset-96lln, resource: bindings, ignored listing per whitelist
Dec 24 11:39:04.446: INFO: namespace e2e-tests-statefulset-96lln deletion completed in 8.281178025s

• [SLOW TEST:58.366 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
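
The stateful pod fails because a standalone pod already holds the same hostPort; once that pod is removed, the StatefulSet controller recreates ss-0. A minimal sketch of such a StatefulSet; the hostPort value is an assumption, since the actual port used by the test is not shown in the log:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    replicas: 1
    selector:
      matchLabels: {app: ss}
    template:
      metadata:
        labels: {app: ss}
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine
          ports:
          - containerPort: 80
            hostPort: 21017   # assumed value; must collide with the standalone test pod

  # Free the port and watch the stateful pod come back
  kubectl delete pod test-pod
  kubectl get pod ss-0 -w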
------------------------------
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:39:04.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-fd0af4ca-2641-11ea-b7c4-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-fd0af4b1-2641-11ea-b7c4-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 24 11:39:04.867: INFO: Waiting up to 5m0s for pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-9b6fz" to be "success or failure"
Dec 24 11:39:04.873: INFO: Pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.822748ms
Dec 24 11:39:07.028: INFO: Pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161274891s
Dec 24 11:39:09.036: INFO: Pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16879395s
Dec 24 11:39:11.213: INFO: Pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345907649s
Dec 24 11:39:13.230: INFO: Pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.362410532s
Dec 24 11:39:15.252: INFO: Pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.384484403s
STEP: Saw pod success
Dec 24 11:39:15.252: INFO: Pod "projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:39:15.267: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Dec 24 11:39:15.347: INFO: Waiting for pod projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:39:15.367: INFO: Pod projected-volume-fd0af43d-2641-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:39:15.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9b6fz" for this suite.
Dec 24 11:39:23.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:39:23.695: INFO: namespace: e2e-tests-projected-9b6fz, resource: bindings, ignored listing per whitelist
Dec 24 11:39:23.837: INFO: namespace e2e-tests-projected-9b6fz deletion completed in 8.463308879s

• [SLOW TEST:19.391 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
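
All three projection sources land in a single volume here. A minimal sketch of a pod combining a configMap, a secret, and the downward API in one projected volume; object names are shortened from the log and the mount path and command are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-volume-pod
  spec:
    restartPolicy: Never
    containers:
    - name: projected-all-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "ls /projected-volume && cat /projected-volume/podname"]
      volumeMounts:
      - name: all-in-one
        mountPath: /projected-volume
    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap:
            name: configmap-projected-all-test-volume
        - secret:
            name: secret-projected-all-test-volume
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name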
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:39:23.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 11:39:24.304: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 24 11:39:24.409: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hlml5/daemonsets","resourceVersion":"15896132"},"items":null}

Dec 24 11:39:24.416: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hlml5/pods","resourceVersion":"15896132"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:39:24.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hlml5" for this suite.
Dec 24 11:39:30.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:39:30.629: INFO: namespace: e2e-tests-daemonsets-hlml5, resource: bindings, ignored listing per whitelist
Dec 24 11:39:30.763: INFO: namespace e2e-tests-daemonsets-hlml5 deletion completed in 6.329237074s

S [SKIPPING] [6.925 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 24 11:39:24.304: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
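
  The skip fires because the suite cannot count two schedulable nodes (the "-1" suggests node discovery failed outright rather than returning one node). A quick way to check what the framework would see, assuming kubectl access to the same cluster:

    kubectl get nodes --no-headers | wc -l
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.unschedulable}{"\n"}{end}'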
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:39:30.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 11:39:30.951: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 24 11:39:36.693: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 24 11:39:41.056: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 24 11:39:43.070: INFO: Creating deployment "test-rollover-deployment"
Dec 24 11:39:43.154: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 24 11:39:46.082: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 24 11:39:46.145: INFO: Ensure that both replica sets have 1 created replica
Dec 24 11:39:46.160: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 24 11:39:46.495: INFO: Updating deployment test-rollover-deployment
Dec 24 11:39:46.495: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 24 11:39:48.691: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 24 11:39:49.183: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 24 11:39:49.235: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:39:49.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784387, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:39:51.260: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:39:51.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784387, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:39:53.272: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:39:53.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784387, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:39:55.306: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:39:55.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784387, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:39:57.272: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:39:57.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784387, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:39:59.270: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:39:59.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784397, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:40:01.297: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:40:01.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784397, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:40:03.320: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:40:03.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784397, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:40:05.280: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:40:05.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784397, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:40:07.257: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 11:40:07.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784397, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712784383, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 11:40:09.441: INFO: 
Dec 24 11:40:09.441: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 24 11:40:09.511: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-n8r65,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n8r65/deployments/test-rollover-deployment,UID:13d6c715-2642-11ea-a994-fa163e34d433,ResourceVersion:15896259,Generation:2,CreationTimestamp:2019-12-24 11:39:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-24 11:39:43 +0000 UTC 2019-12-24 11:39:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-24 11:40:08 +0000 UTC 2019-12-24 11:39:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 24 11:40:09.517: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-n8r65,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n8r65/replicasets/test-rollover-deployment-5b8479fdb6,UID:15e35e03-2642-11ea-a994-fa163e34d433,ResourceVersion:15896250,Generation:2,CreationTimestamp:2019-12-24 11:39:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 13d6c715-2642-11ea-a994-fa163e34d433 0xc0020b1b27 0xc0020b1b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 24 11:40:09.517: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 24 11:40:09.517: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-n8r65,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n8r65/replicasets/test-rollover-controller,UID:0c9a5200-2642-11ea-a994-fa163e34d433,ResourceVersion:15896258,Generation:2,CreationTimestamp:2019-12-24 11:39:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 13d6c715-2642-11ea-a994-fa163e34d433 0xc0020b119f 0xc0020b1210}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 11:40:09.517: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-n8r65,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n8r65/replicasets/test-rollover-deployment-58494b7559,UID:13f62011-2642-11ea-a994-fa163e34d433,ResourceVersion:15896217,Generation:2,CreationTimestamp:2019-12-24 11:39:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 13d6c715-2642-11ea-a994-fa163e34d433 0xc0020b1607 0xc0020b1608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 11:40:09.524: INFO: Pod "test-rollover-deployment-5b8479fdb6-x2rgv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-x2rgv,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-n8r65,SelfLink:/api/v1/namespaces/e2e-tests-deployment-n8r65/pods/test-rollover-deployment-5b8479fdb6-x2rgv,UID:161bd3cf-2642-11ea-a994-fa163e34d433,ResourceVersion:15896235,Generation:0,CreationTimestamp:2019-12-24 11:39:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 15e35e03-2642-11ea-a994-fa163e34d433 0xc001b0cdb7 0xc001b0cdb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dwgpj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dwgpj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dwgpj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b0ce20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b0ce40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:39:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:39:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:39:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:39:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-24 11:39:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-24 11:39:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://199f88809acb0e38b9809d1c5650c5e35c4df0fc327760745a4c2c0c8c90471d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:40:09.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-n8r65" for this suite.
Dec 24 11:40:17.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:40:17.895: INFO: namespace: e2e-tests-deployment-n8r65, resource: bindings, ignored listing per whitelist
Dec 24 11:40:17.897: INFO: namespace e2e-tests-deployment-n8r65 deletion completed in 8.26014356s

• [SLOW TEST:47.134 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
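
The rollover completes cleanly because maxUnavailable=0 / maxSurge=1 together with minReadySeconds=10 keep the old pod serving until the new one has been ready for 10 seconds. A minimal sketch of the deployment and of triggering the rollover, reusing the images seen in the replica-set dumps above (the exact e2e template differs):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-rollover-deployment
  spec:
    replicas: 1
    minReadySeconds: 10
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0
        maxSurge: 1
    selector:
      matchLabels: {name: rollover-pod}
    template:
      metadata:
        labels: {name: rollover-pod}
      spec:
        containers:
        - name: redis
          image: gcr.io/google_samples/gb-redisslave:nonexistent

  # Roll over to the working image and confirm the old replica sets drain to 0
  kubectl set image deployment/test-rollover-deployment redis=gcr.io/kubernetes-e2e-test-images/redis:1.0
  kubectl rollout status deployment/test-rollover-deployment
  kubectl get rs -l name=rollover-pod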
------------------------------
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:40:17.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 11:40:19.133: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:40:20.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-zqql9" for this suite.
Dec 24 11:40:26.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:40:26.619: INFO: namespace: e2e-tests-custom-resource-definition-zqql9, resource: bindings, ignored listing per whitelist
Dec 24 11:40:26.642: INFO: namespace e2e-tests-custom-resource-definition-zqql9 deletion completed in 6.189866105s

• [SLOW TEST:8.745 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
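
Creating and deleting a CRD can be exercised directly against the cluster; a minimal sketch using the apiextensions v1beta1 API that matches this v1.13 cluster, with an invented group and resource name for illustration:

  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    name: testcrds.example.com
  spec:
    group: example.com
    version: v1
    scope: Namespaced
    names:
      plural: testcrds
      singular: testcrd
      kind: TestCrd

  kubectl apply -f testcrd.yaml
  kubectl get crd testcrds.example.com
  kubectl delete crd testcrds.example.com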
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:40:26.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 24 11:40:26.866: INFO: Waiting up to 5m0s for pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-6c889" to be "success or failure"
Dec 24 11:40:26.883: INFO: Pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.421504ms
Dec 24 11:40:28.908: INFO: Pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041716008s
Dec 24 11:40:30.930: INFO: Pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064229018s
Dec 24 11:40:32.984: INFO: Pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117534912s
Dec 24 11:40:35.022: INFO: Pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155442979s
Dec 24 11:40:37.039: INFO: Pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.172950396s
STEP: Saw pod success
Dec 24 11:40:37.039: INFO: Pod "downward-api-2decef9f-2642-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:40:37.046: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-2decef9f-2642-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 11:40:37.163: INFO: Waiting for pod downward-api-2decef9f-2642-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:40:37.233: INFO: Pod downward-api-2decef9f-2642-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:40:37.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6c889" for this suite.
Dec 24 11:40:43.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:40:44.254: INFO: namespace: e2e-tests-downward-api-6c889, resource: bindings, ignored listing per whitelist
Dec 24 11:40:44.265: INFO: namespace e2e-tests-downward-api-6c889 deletion completed in 7.022164776s

• [SLOW TEST:17.623 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
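
The pod UID reaches the container through a downward-API environment variable. A minimal sketch of the kind of spec the test builds; the container command, image, and env var name are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-pod
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "env | grep POD_UID"]
      env:
      - name: POD_UID
        valueFrom:
          fieldRef:
            fieldPath: metadata.uid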
------------------------------
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:40:44.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-2vf9h/configmap-test-388347c1-2642-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 11:40:44.634: INFO: Waiting up to 5m0s for pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-2vf9h" to be "success or failure"
Dec 24 11:40:44.773: INFO: Pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 139.59668ms
Dec 24 11:40:47.175: INFO: Pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.541470576s
Dec 24 11:40:49.192: INFO: Pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.558648615s
Dec 24 11:40:51.239: INFO: Pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.604908407s
Dec 24 11:40:53.254: INFO: Pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.620591808s
Dec 24 11:40:55.266: INFO: Pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.63247353s
STEP: Saw pod success
Dec 24 11:40:55.266: INFO: Pod "pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:40:55.279: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005 container env-test: 
STEP: delete the pod
Dec 24 11:40:55.491: INFO: Waiting for pod pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:40:55.510: INFO: Pod pod-configmaps-388680af-2642-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:40:55.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2vf9h" for this suite.
Dec 24 11:41:01.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:41:01.757: INFO: namespace: e2e-tests-configmap-2vf9h, resource: bindings, ignored listing per whitelist
Dec 24 11:41:01.763: INFO: namespace e2e-tests-configmap-2vf9h deletion completed in 6.239344591s

• [SLOW TEST:17.498 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
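
Here a ConfigMap key is surfaced as an environment variable rather than a volume. A minimal sketch; the key, value, and variable name are illustrative:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox:1.29
      command: ["sh", "-c", "env"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: configmap-test
            key: data-1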
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:41:01.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:41:02.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-tpwmv" to be "success or failure"
Dec 24 11:41:02.116: INFO: Pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 108.653046ms
Dec 24 11:41:04.138: INFO: Pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130936747s
Dec 24 11:41:06.174: INFO: Pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166187548s
Dec 24 11:41:08.201: INFO: Pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193216939s
Dec 24 11:41:10.216: INFO: Pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208572419s
Dec 24 11:41:12.232: INFO: Pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224318341s
STEP: Saw pod success
Dec 24 11:41:12.232: INFO: Pod "downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:41:12.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:41:12.332: INFO: Waiting for pod downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:41:12.351: INFO: Pod downwardapi-volume-42dfc69b-2642-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:41:12.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tpwmv" for this suite.
Dec 24 11:41:18.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:41:18.662: INFO: namespace: e2e-tests-downward-api-tpwmv, resource: bindings, ignored listing per whitelist
Dec 24 11:41:18.680: INFO: namespace e2e-tests-downward-api-tpwmv deletion completed in 6.303071302s

• [SLOW TEST:16.916 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
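Editor's note: the downward API volume test above checks that the container's memory request is rendered into a file inside the pod. A hand-rolled equivalent might look like this sketch; all names and the 32Mi request are illustrative.

# Sketch only: expose requests.memory through a downwardAPI volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF

# The file holds the request in bytes, e.g. 33554432 for 32Mi.
kubectl logs downwardapi-volume-demo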
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:41:18.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1224 11:41:49.615228       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 11:41:49.615: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:41:49.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-s8q6r" for this suite.
Dec 24 11:42:00.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:42:00.283: INFO: namespace: e2e-tests-gc-s8q6r, resource: bindings, ignored listing per whitelist
Dec 24 11:42:00.923: INFO: namespace e2e-tests-gc-s8q6r deletion completed in 11.300897412s

• [SLOW TEST:42.243 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
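Editor's note: the orphaning behaviour exercised above (deleteOptions.PropagationPolicy: Orphan) can be reproduced by hand: delete the Deployment without cascading and confirm the ReplicaSet survives. The sketch below uses the kubectl v1.13-era flag; names are illustrative.

# Sketch only: delete a Deployment but orphan its ReplicaSet.
kubectl create deployment gc-demo --image=nginx
kubectl get rs -l app=gc-demo            # ReplicaSet created by the Deployment

# v1.13-era syntax; newer kubectl releases use --cascade=orphan instead.
kubectl delete deployment gc-demo --cascade=false

kubectl get rs -l app=gc-demo            # ReplicaSet should still exist, now ownerless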
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:42:00.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 24 11:42:01.668: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix070493386/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:42:01.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2nfgj" for this suite.
Dec 24 11:42:07.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:42:07.990: INFO: namespace: e2e-tests-kubectl-2nfgj, resource: bindings, ignored listing per whitelist
Dec 24 11:42:08.029: INFO: namespace e2e-tests-kubectl-2nfgj deletion completed in 6.217632185s

• [SLOW TEST:7.105 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
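Editor's note: the --unix-socket proxy mode checked above is straightforward to exercise manually; a sketch follows, with the socket path assumed rather than taken from the log.

# Start the API proxy on a unix socket and fetch /api/ through it.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1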
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:42:08.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-6gf52
I1224 11:42:08.261190       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-6gf52, replica count: 1
I1224 11:42:09.312122       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:10.313045       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:11.313625       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:12.314212       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:13.314759       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:14.315199       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:15.316099       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:16.316698       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:17.317322       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 11:42:18.317816       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 24 11:42:18.495: INFO: Created: latency-svc-jtzz5
Dec 24 11:42:18.640: INFO: Got endpoints: latency-svc-jtzz5 [222.477756ms]
Dec 24 11:42:18.778: INFO: Created: latency-svc-nfgks
Dec 24 11:42:18.790: INFO: Created: latency-svc-2p6ph
Dec 24 11:42:18.792: INFO: Got endpoints: latency-svc-nfgks [150.077286ms]
Dec 24 11:42:18.810: INFO: Got endpoints: latency-svc-2p6ph [167.812149ms]
Dec 24 11:42:19.012: INFO: Created: latency-svc-hjckg
Dec 24 11:42:19.048: INFO: Got endpoints: latency-svc-hjckg [405.694987ms]
Dec 24 11:42:19.213: INFO: Created: latency-svc-gxxhd
Dec 24 11:42:19.252: INFO: Got endpoints: latency-svc-gxxhd [610.820245ms]
Dec 24 11:42:19.430: INFO: Created: latency-svc-x6g77
Dec 24 11:42:19.686: INFO: Got endpoints: latency-svc-x6g77 [1.044530143s]
Dec 24 11:42:19.695: INFO: Created: latency-svc-twlm9
Dec 24 11:42:19.724: INFO: Got endpoints: latency-svc-twlm9 [1.08233435s]
Dec 24 11:42:19.900: INFO: Created: latency-svc-2j2dd
Dec 24 11:42:19.926: INFO: Got endpoints: latency-svc-2j2dd [1.283061794s]
Dec 24 11:42:19.976: INFO: Created: latency-svc-rf9vz
Dec 24 11:42:20.103: INFO: Got endpoints: latency-svc-rf9vz [1.459963377s]
Dec 24 11:42:20.118: INFO: Created: latency-svc-4rjd6
Dec 24 11:42:20.135: INFO: Got endpoints: latency-svc-4rjd6 [1.492640347s]
Dec 24 11:42:20.158: INFO: Created: latency-svc-nx6x2
Dec 24 11:42:20.174: INFO: Got endpoints: latency-svc-nx6x2 [1.533106042s]
Dec 24 11:42:20.358: INFO: Created: latency-svc-mc94f
Dec 24 11:42:20.387: INFO: Got endpoints: latency-svc-mc94f [1.744357365s]
Dec 24 11:42:20.598: INFO: Created: latency-svc-2wk8l
Dec 24 11:42:20.607: INFO: Got endpoints: latency-svc-2wk8l [1.964901467s]
Dec 24 11:42:20.668: INFO: Created: latency-svc-kr4gq
Dec 24 11:42:20.788: INFO: Got endpoints: latency-svc-kr4gq [2.145501459s]
Dec 24 11:42:20.810: INFO: Created: latency-svc-rxv2t
Dec 24 11:42:20.822: INFO: Got endpoints: latency-svc-rxv2t [2.179098264s]
Dec 24 11:42:20.897: INFO: Created: latency-svc-d9lxq
Dec 24 11:42:21.009: INFO: Got endpoints: latency-svc-d9lxq [2.366602256s]
Dec 24 11:42:21.037: INFO: Created: latency-svc-jh6tg
Dec 24 11:42:21.060: INFO: Got endpoints: latency-svc-jh6tg [2.267662616s]
Dec 24 11:42:21.427: INFO: Created: latency-svc-r8frd
Dec 24 11:42:21.444: INFO: Got endpoints: latency-svc-r8frd [2.632848143s]
Dec 24 11:42:21.457: INFO: Created: latency-svc-r29hs
Dec 24 11:42:21.471: INFO: Got endpoints: latency-svc-r29hs [2.423482592s]
Dec 24 11:42:21.677: INFO: Created: latency-svc-t5zvf
Dec 24 11:42:21.688: INFO: Got endpoints: latency-svc-t5zvf [2.435853898s]
Dec 24 11:42:21.887: INFO: Created: latency-svc-k6tvh
Dec 24 11:42:21.920: INFO: Got endpoints: latency-svc-k6tvh [2.233928465s]
Dec 24 11:42:21.964: INFO: Created: latency-svc-v4ms9
Dec 24 11:42:22.148: INFO: Got endpoints: latency-svc-v4ms9 [2.423907131s]
Dec 24 11:42:22.200: INFO: Created: latency-svc-xzkgl
Dec 24 11:42:22.201: INFO: Got endpoints: latency-svc-xzkgl [2.274765s]
Dec 24 11:42:22.360: INFO: Created: latency-svc-cqnl8
Dec 24 11:42:22.385: INFO: Got endpoints: latency-svc-cqnl8 [2.282118701s]
Dec 24 11:42:22.440: INFO: Created: latency-svc-qn7zl
Dec 24 11:42:22.661: INFO: Got endpoints: latency-svc-qn7zl [2.525223308s]
Dec 24 11:42:22.720: INFO: Created: latency-svc-nkn4s
Dec 24 11:42:22.748: INFO: Got endpoints: latency-svc-nkn4s [2.573603404s]
Dec 24 11:42:22.912: INFO: Created: latency-svc-46ggd
Dec 24 11:42:22.950: INFO: Got endpoints: latency-svc-46ggd [2.562829582s]
Dec 24 11:42:22.981: INFO: Created: latency-svc-75bvw
Dec 24 11:42:23.156: INFO: Got endpoints: latency-svc-75bvw [2.548262208s]
Dec 24 11:42:23.173: INFO: Created: latency-svc-g645p
Dec 24 11:42:23.216: INFO: Got endpoints: latency-svc-g645p [2.427774525s]
Dec 24 11:42:23.351: INFO: Created: latency-svc-kcxsb
Dec 24 11:42:23.364: INFO: Got endpoints: latency-svc-kcxsb [2.541621502s]
Dec 24 11:42:23.605: INFO: Created: latency-svc-vh85k
Dec 24 11:42:23.631: INFO: Got endpoints: latency-svc-vh85k [2.621359749s]
Dec 24 11:42:23.787: INFO: Created: latency-svc-2cbkq
Dec 24 11:42:23.814: INFO: Got endpoints: latency-svc-2cbkq [2.753303036s]
Dec 24 11:42:24.039: INFO: Created: latency-svc-rxxmm
Dec 24 11:42:24.067: INFO: Got endpoints: latency-svc-rxxmm [2.623446693s]
Dec 24 11:42:24.109: INFO: Created: latency-svc-jhnzn
Dec 24 11:42:24.139: INFO: Got endpoints: latency-svc-jhnzn [2.666627684s]
Dec 24 11:42:24.311: INFO: Created: latency-svc-6vgxz
Dec 24 11:42:24.322: INFO: Got endpoints: latency-svc-6vgxz [2.633084592s]
Dec 24 11:42:24.564: INFO: Created: latency-svc-wngf5
Dec 24 11:42:24.580: INFO: Got endpoints: latency-svc-wngf5 [2.659098051s]
Dec 24 11:42:24.738: INFO: Created: latency-svc-jw9bb
Dec 24 11:42:24.751: INFO: Got endpoints: latency-svc-jw9bb [2.602265104s]
Dec 24 11:42:24.836: INFO: Created: latency-svc-zzgb8
Dec 24 11:42:25.139: INFO: Got endpoints: latency-svc-zzgb8 [2.938091503s]
Dec 24 11:42:25.172: INFO: Created: latency-svc-2m6bm
Dec 24 11:42:25.185: INFO: Got endpoints: latency-svc-2m6bm [2.79941941s]
Dec 24 11:42:26.200: INFO: Created: latency-svc-84qpn
Dec 24 11:42:26.339: INFO: Got endpoints: latency-svc-84qpn [3.678208967s]
Dec 24 11:42:26.370: INFO: Created: latency-svc-drghd
Dec 24 11:42:26.403: INFO: Got endpoints: latency-svc-drghd [3.653836089s]
Dec 24 11:42:26.592: INFO: Created: latency-svc-fpb2t
Dec 24 11:42:26.623: INFO: Got endpoints: latency-svc-fpb2t [3.672843022s]
Dec 24 11:42:26.666: INFO: Created: latency-svc-w4v7c
Dec 24 11:42:26.791: INFO: Got endpoints: latency-svc-w4v7c [387.684986ms]
Dec 24 11:42:26.809: INFO: Created: latency-svc-mtthm
Dec 24 11:42:26.833: INFO: Got endpoints: latency-svc-mtthm [3.676285112s]
Dec 24 11:42:27.100: INFO: Created: latency-svc-rcrds
Dec 24 11:42:27.110: INFO: Got endpoints: latency-svc-rcrds [3.893324444s]
Dec 24 11:42:27.241: INFO: Created: latency-svc-gflbn
Dec 24 11:42:27.256: INFO: Got endpoints: latency-svc-gflbn [3.891488278s]
Dec 24 11:42:27.508: INFO: Created: latency-svc-x4n5l
Dec 24 11:42:27.559: INFO: Got endpoints: latency-svc-x4n5l [3.9280644s]
Dec 24 11:42:27.774: INFO: Created: latency-svc-ljpds
Dec 24 11:42:27.835: INFO: Got endpoints: latency-svc-ljpds [4.021323645s]
Dec 24 11:42:27.952: INFO: Created: latency-svc-hpdl6
Dec 24 11:42:27.969: INFO: Got endpoints: latency-svc-hpdl6 [3.901251631s]
Dec 24 11:42:28.091: INFO: Created: latency-svc-ssk24
Dec 24 11:42:28.106: INFO: Got endpoints: latency-svc-ssk24 [3.967162009s]
Dec 24 11:42:28.383: INFO: Created: latency-svc-42v4x
Dec 24 11:42:28.589: INFO: Got endpoints: latency-svc-42v4x [4.267476165s]
Dec 24 11:42:28.962: INFO: Created: latency-svc-2lz2v
Dec 24 11:42:29.173: INFO: Got endpoints: latency-svc-2lz2v [4.593388594s]
Dec 24 11:42:29.376: INFO: Created: latency-svc-vx59n
Dec 24 11:42:29.383: INFO: Got endpoints: latency-svc-vx59n [4.632381362s]
Dec 24 11:42:29.554: INFO: Created: latency-svc-zp5wx
Dec 24 11:42:29.581: INFO: Got endpoints: latency-svc-zp5wx [4.442203342s]
Dec 24 11:42:29.790: INFO: Created: latency-svc-8w4wg
Dec 24 11:42:29.863: INFO: Got endpoints: latency-svc-8w4wg [4.678644091s]
Dec 24 11:42:29.896: INFO: Created: latency-svc-hd2bh
Dec 24 11:42:30.113: INFO: Got endpoints: latency-svc-hd2bh [3.772827792s]
Dec 24 11:42:30.334: INFO: Created: latency-svc-rbmlp
Dec 24 11:42:30.389: INFO: Got endpoints: latency-svc-rbmlp [3.765331983s]
Dec 24 11:42:30.572: INFO: Created: latency-svc-rvh5t
Dec 24 11:42:30.758: INFO: Got endpoints: latency-svc-rvh5t [3.967009797s]
Dec 24 11:42:30.777: INFO: Created: latency-svc-d6ll5
Dec 24 11:42:30.783: INFO: Got endpoints: latency-svc-d6ll5 [3.950236378s]
Dec 24 11:42:30.980: INFO: Created: latency-svc-dv5mw
Dec 24 11:42:30.996: INFO: Got endpoints: latency-svc-dv5mw [3.885724549s]
Dec 24 11:42:31.142: INFO: Created: latency-svc-68xnz
Dec 24 11:42:31.212: INFO: Got endpoints: latency-svc-68xnz [3.956497674s]
Dec 24 11:42:31.370: INFO: Created: latency-svc-hhxfx
Dec 24 11:42:31.404: INFO: Got endpoints: latency-svc-hhxfx [3.844261484s]
Dec 24 11:42:31.596: INFO: Created: latency-svc-rnp64
Dec 24 11:42:31.629: INFO: Got endpoints: latency-svc-rnp64 [3.793136748s]
Dec 24 11:42:31.817: INFO: Created: latency-svc-wqths
Dec 24 11:42:31.831: INFO: Got endpoints: latency-svc-wqths [3.86165448s]
Dec 24 11:42:32.021: INFO: Created: latency-svc-8j2gc
Dec 24 11:42:32.043: INFO: Got endpoints: latency-svc-8j2gc [3.936675815s]
Dec 24 11:42:32.209: INFO: Created: latency-svc-4fnzs
Dec 24 11:42:32.222: INFO: Got endpoints: latency-svc-4fnzs [3.632177746s]
Dec 24 11:42:32.350: INFO: Created: latency-svc-vq2tj
Dec 24 11:42:32.378: INFO: Got endpoints: latency-svc-vq2tj [3.20427545s]
Dec 24 11:42:32.645: INFO: Created: latency-svc-d5rnx
Dec 24 11:42:32.661: INFO: Got endpoints: latency-svc-d5rnx [3.277671236s]
Dec 24 11:42:32.994: INFO: Created: latency-svc-5894l
Dec 24 11:42:33.020: INFO: Got endpoints: latency-svc-5894l [3.438490106s]
Dec 24 11:42:33.163: INFO: Created: latency-svc-c42kd
Dec 24 11:42:33.185: INFO: Got endpoints: latency-svc-c42kd [3.321083358s]
Dec 24 11:42:33.224: INFO: Created: latency-svc-j8pxn
Dec 24 11:42:33.391: INFO: Got endpoints: latency-svc-j8pxn [3.277783567s]
Dec 24 11:42:33.421: INFO: Created: latency-svc-ftdf8
Dec 24 11:42:33.465: INFO: Got endpoints: latency-svc-ftdf8 [3.076433605s]
Dec 24 11:42:33.473: INFO: Created: latency-svc-zfs4q
Dec 24 11:42:33.613: INFO: Got endpoints: latency-svc-zfs4q [2.854961822s]
Dec 24 11:42:33.661: INFO: Created: latency-svc-s8bp4
Dec 24 11:42:33.681: INFO: Got endpoints: latency-svc-s8bp4 [2.89725416s]
Dec 24 11:42:33.969: INFO: Created: latency-svc-hn5ks
Dec 24 11:42:33.970: INFO: Got endpoints: latency-svc-hn5ks [2.973441881s]
Dec 24 11:42:34.220: INFO: Created: latency-svc-f2jms
Dec 24 11:42:34.265: INFO: Got endpoints: latency-svc-f2jms [3.052165781s]
Dec 24 11:42:34.318: INFO: Created: latency-svc-f6gfg
Dec 24 11:42:34.462: INFO: Got endpoints: latency-svc-f6gfg [3.057945336s]
Dec 24 11:42:34.629: INFO: Created: latency-svc-6v5ph
Dec 24 11:42:34.648: INFO: Got endpoints: latency-svc-6v5ph [3.018826323s]
Dec 24 11:42:34.848: INFO: Created: latency-svc-jkwwv
Dec 24 11:42:34.883: INFO: Got endpoints: latency-svc-jkwwv [3.051839567s]
Dec 24 11:42:35.037: INFO: Created: latency-svc-ggvdq
Dec 24 11:42:35.043: INFO: Got endpoints: latency-svc-ggvdq [2.999909783s]
Dec 24 11:42:35.188: INFO: Created: latency-svc-m2cck
Dec 24 11:42:35.203: INFO: Got endpoints: latency-svc-m2cck [2.980822243s]
Dec 24 11:42:35.338: INFO: Created: latency-svc-rxthv
Dec 24 11:42:35.415: INFO: Got endpoints: latency-svc-rxthv [3.036980043s]
Dec 24 11:42:35.532: INFO: Created: latency-svc-4p78d
Dec 24 11:42:35.553: INFO: Got endpoints: latency-svc-4p78d [2.891118491s]
Dec 24 11:42:35.762: INFO: Created: latency-svc-qnrk4
Dec 24 11:42:35.763: INFO: Got endpoints: latency-svc-qnrk4 [2.742194935s]
Dec 24 11:42:35.813: INFO: Created: latency-svc-drgdb
Dec 24 11:42:35.904: INFO: Got endpoints: latency-svc-drgdb [2.718275196s]
Dec 24 11:42:35.991: INFO: Created: latency-svc-h5f4q
Dec 24 11:42:36.050: INFO: Got endpoints: latency-svc-h5f4q [2.658875088s]
Dec 24 11:42:36.066: INFO: Created: latency-svc-6wthr
Dec 24 11:42:36.082: INFO: Got endpoints: latency-svc-6wthr [2.616461086s]
Dec 24 11:42:36.148: INFO: Created: latency-svc-hw9h8
Dec 24 11:42:36.265: INFO: Got endpoints: latency-svc-hw9h8 [2.651192726s]
Dec 24 11:42:36.289: INFO: Created: latency-svc-q2nbf
Dec 24 11:42:36.298: INFO: Got endpoints: latency-svc-q2nbf [2.617536353s]
Dec 24 11:42:36.355: INFO: Created: latency-svc-tzjn6
Dec 24 11:42:36.598: INFO: Got endpoints: latency-svc-tzjn6 [2.627999249s]
Dec 24 11:42:36.598: INFO: Created: latency-svc-cn5lh
Dec 24 11:42:36.645: INFO: Got endpoints: latency-svc-cn5lh [2.380424286s]
Dec 24 11:42:36.693: INFO: Created: latency-svc-hvsdp
Dec 24 11:42:36.826: INFO: Got endpoints: latency-svc-hvsdp [2.363079058s]
Dec 24 11:42:36.901: INFO: Created: latency-svc-2vqqf
Dec 24 11:42:37.023: INFO: Got endpoints: latency-svc-2vqqf [2.375714886s]
Dec 24 11:42:37.072: INFO: Created: latency-svc-2nmsh
Dec 24 11:42:37.112: INFO: Got endpoints: latency-svc-2nmsh [2.22927279s]
Dec 24 11:42:37.261: INFO: Created: latency-svc-tk9q2
Dec 24 11:42:37.271: INFO: Got endpoints: latency-svc-tk9q2 [2.227691774s]
Dec 24 11:42:37.351: INFO: Created: latency-svc-2mhs2
Dec 24 11:42:37.491: INFO: Got endpoints: latency-svc-2mhs2 [2.288319403s]
Dec 24 11:42:37.492: INFO: Created: latency-svc-t9d6h
Dec 24 11:42:37.507: INFO: Got endpoints: latency-svc-t9d6h [2.092076282s]
Dec 24 11:42:37.668: INFO: Created: latency-svc-4qdmq
Dec 24 11:42:38.029: INFO: Got endpoints: latency-svc-4qdmq [2.476459881s]
Dec 24 11:42:38.262: INFO: Created: latency-svc-kzf88
Dec 24 11:42:38.667: INFO: Got endpoints: latency-svc-kzf88 [2.904744233s]
Dec 24 11:42:38.670: INFO: Created: latency-svc-vqvdk
Dec 24 11:42:38.682: INFO: Got endpoints: latency-svc-vqvdk [2.778020833s]
Dec 24 11:42:38.855: INFO: Created: latency-svc-xvrr6
Dec 24 11:42:38.855: INFO: Got endpoints: latency-svc-xvrr6 [2.805223775s]
Dec 24 11:42:38.898: INFO: Created: latency-svc-8bd4x
Dec 24 11:42:38.916: INFO: Got endpoints: latency-svc-8bd4x [2.833764406s]
Dec 24 11:42:39.037: INFO: Created: latency-svc-9gnsr
Dec 24 11:42:39.052: INFO: Got endpoints: latency-svc-9gnsr [2.786762413s]
Dec 24 11:42:39.272: INFO: Created: latency-svc-lfsjj
Dec 24 11:42:39.310: INFO: Got endpoints: latency-svc-lfsjj [3.011093209s]
Dec 24 11:42:39.457: INFO: Created: latency-svc-pfkf9
Dec 24 11:42:39.472: INFO: Got endpoints: latency-svc-pfkf9 [2.873696811s]
Dec 24 11:42:39.519: INFO: Created: latency-svc-tn2fq
Dec 24 11:42:39.685: INFO: Got endpoints: latency-svc-tn2fq [3.038776505s]
Dec 24 11:42:39.713: INFO: Created: latency-svc-drcl5
Dec 24 11:42:39.763: INFO: Got endpoints: latency-svc-drcl5 [2.937100952s]
Dec 24 11:42:39.784: INFO: Created: latency-svc-f8g6w
Dec 24 11:42:39.886: INFO: Got endpoints: latency-svc-f8g6w [2.861860281s]
Dec 24 11:42:39.910: INFO: Created: latency-svc-gnkh5
Dec 24 11:42:39.952: INFO: Got endpoints: latency-svc-gnkh5 [2.83985249s]
Dec 24 11:42:40.095: INFO: Created: latency-svc-99ztj
Dec 24 11:42:40.101: INFO: Got endpoints: latency-svc-99ztj [2.829916151s]
Dec 24 11:42:40.160: INFO: Created: latency-svc-rcgvf
Dec 24 11:42:40.244: INFO: Got endpoints: latency-svc-rcgvf [2.752434322s]
Dec 24 11:42:40.287: INFO: Created: latency-svc-jtjz5
Dec 24 11:42:40.303: INFO: Got endpoints: latency-svc-jtjz5 [2.795284681s]
Dec 24 11:42:40.342: INFO: Created: latency-svc-glbxv
Dec 24 11:42:40.550: INFO: Got endpoints: latency-svc-glbxv [2.520432126s]
Dec 24 11:42:40.643: INFO: Created: latency-svc-fl69l
Dec 24 11:42:40.761: INFO: Got endpoints: latency-svc-fl69l [2.093499496s]
Dec 24 11:42:40.881: INFO: Created: latency-svc-x9n7t
Dec 24 11:42:41.083: INFO: Got endpoints: latency-svc-x9n7t [2.401077565s]
Dec 24 11:42:41.095: INFO: Created: latency-svc-xqd6l
Dec 24 11:42:41.114: INFO: Got endpoints: latency-svc-xqd6l [2.258732613s]
Dec 24 11:42:41.325: INFO: Created: latency-svc-48gjt
Dec 24 11:42:41.325: INFO: Got endpoints: latency-svc-48gjt [2.40898725s]
Dec 24 11:42:41.493: INFO: Created: latency-svc-zgpcg
Dec 24 11:42:41.582: INFO: Got endpoints: latency-svc-zgpcg [2.53014988s]
Dec 24 11:42:41.721: INFO: Created: latency-svc-nf9cr
Dec 24 11:42:41.731: INFO: Got endpoints: latency-svc-nf9cr [2.421021594s]
Dec 24 11:42:41.897: INFO: Created: latency-svc-8kfgw
Dec 24 11:42:41.929: INFO: Got endpoints: latency-svc-8kfgw [2.4577483s]
Dec 24 11:42:42.002: INFO: Created: latency-svc-jhp9v
Dec 24 11:42:42.129: INFO: Got endpoints: latency-svc-jhp9v [2.444486055s]
Dec 24 11:42:42.213: INFO: Created: latency-svc-pqzht
Dec 24 11:42:42.287: INFO: Got endpoints: latency-svc-pqzht [2.523703045s]
Dec 24 11:42:42.348: INFO: Created: latency-svc-9jxwr
Dec 24 11:42:42.557: INFO: Got endpoints: latency-svc-9jxwr [2.67083098s]
Dec 24 11:42:42.595: INFO: Created: latency-svc-6stxl
Dec 24 11:42:42.619: INFO: Got endpoints: latency-svc-6stxl [2.666213301s]
Dec 24 11:42:42.738: INFO: Created: latency-svc-kkf4b
Dec 24 11:42:42.750: INFO: Got endpoints: latency-svc-kkf4b [2.648834409s]
Dec 24 11:42:42.828: INFO: Created: latency-svc-d85q9
Dec 24 11:42:42.972: INFO: Got endpoints: latency-svc-d85q9 [2.727191541s]
Dec 24 11:42:43.019: INFO: Created: latency-svc-5r8nl
Dec 24 11:42:43.029: INFO: Got endpoints: latency-svc-5r8nl [2.726345561s]
Dec 24 11:42:43.189: INFO: Created: latency-svc-bgzjv
Dec 24 11:42:43.535: INFO: Created: latency-svc-pg665
Dec 24 11:42:43.543: INFO: Got endpoints: latency-svc-bgzjv [2.992854235s]
Dec 24 11:42:43.547: INFO: Got endpoints: latency-svc-pg665 [2.785566334s]
Dec 24 11:42:43.844: INFO: Created: latency-svc-ms9kh
Dec 24 11:42:43.858: INFO: Got endpoints: latency-svc-ms9kh [2.774797301s]
Dec 24 11:42:44.148: INFO: Created: latency-svc-stj98
Dec 24 11:42:44.164: INFO: Got endpoints: latency-svc-stj98 [3.049681801s]
Dec 24 11:42:44.279: INFO: Created: latency-svc-wssnk
Dec 24 11:42:44.305: INFO: Got endpoints: latency-svc-wssnk [2.980049207s]
Dec 24 11:42:44.492: INFO: Created: latency-svc-z4cqt
Dec 24 11:42:44.526: INFO: Got endpoints: latency-svc-z4cqt [2.944281738s]
Dec 24 11:42:44.662: INFO: Created: latency-svc-j2bpw
Dec 24 11:42:44.686: INFO: Got endpoints: latency-svc-j2bpw [2.954944472s]
Dec 24 11:42:44.858: INFO: Created: latency-svc-r9kzp
Dec 24 11:42:44.900: INFO: Got endpoints: latency-svc-r9kzp [2.970230578s]
Dec 24 11:42:45.053: INFO: Created: latency-svc-vmbpj
Dec 24 11:42:45.053: INFO: Got endpoints: latency-svc-vmbpj [2.923484539s]
Dec 24 11:42:45.192: INFO: Created: latency-svc-48vr8
Dec 24 11:42:45.201: INFO: Got endpoints: latency-svc-48vr8 [2.913657474s]
Dec 24 11:42:45.289: INFO: Created: latency-svc-wpddn
Dec 24 11:42:45.382: INFO: Got endpoints: latency-svc-wpddn [2.824847088s]
Dec 24 11:42:45.440: INFO: Created: latency-svc-b6rdv
Dec 24 11:42:45.464: INFO: Got endpoints: latency-svc-b6rdv [2.844841156s]
Dec 24 11:42:45.590: INFO: Created: latency-svc-kw5wm
Dec 24 11:42:45.603: INFO: Got endpoints: latency-svc-kw5wm [2.852399487s]
Dec 24 11:42:46.741: INFO: Created: latency-svc-sc52j
Dec 24 11:42:46.797: INFO: Got endpoints: latency-svc-sc52j [3.824446096s]
Dec 24 11:42:46.958: INFO: Created: latency-svc-q85fz
Dec 24 11:42:46.989: INFO: Got endpoints: latency-svc-q85fz [3.959372466s]
Dec 24 11:42:47.120: INFO: Created: latency-svc-74c2k
Dec 24 11:42:47.167: INFO: Got endpoints: latency-svc-74c2k [3.61938699s]
Dec 24 11:42:47.337: INFO: Created: latency-svc-cfblq
Dec 24 11:42:47.353: INFO: Got endpoints: latency-svc-cfblq [3.809129839s]
Dec 24 11:42:47.507: INFO: Created: latency-svc-k2jtb
Dec 24 11:42:47.516: INFO: Got endpoints: latency-svc-k2jtb [3.657341113s]
Dec 24 11:42:47.613: INFO: Created: latency-svc-wjx8k
Dec 24 11:42:47.743: INFO: Got endpoints: latency-svc-wjx8k [3.578901917s]
Dec 24 11:42:47.771: INFO: Created: latency-svc-l7s6d
Dec 24 11:42:47.914: INFO: Got endpoints: latency-svc-l7s6d [3.608290668s]
Dec 24 11:42:47.946: INFO: Created: latency-svc-cc94d
Dec 24 11:42:47.956: INFO: Got endpoints: latency-svc-cc94d [3.429578354s]
Dec 24 11:42:48.105: INFO: Created: latency-svc-bcdx7
Dec 24 11:42:48.112: INFO: Got endpoints: latency-svc-bcdx7 [3.425939271s]
Dec 24 11:42:48.176: INFO: Created: latency-svc-sp2pg
Dec 24 11:42:48.256: INFO: Got endpoints: latency-svc-sp2pg [3.355264392s]
Dec 24 11:42:48.281: INFO: Created: latency-svc-rj64l
Dec 24 11:42:48.308: INFO: Got endpoints: latency-svc-rj64l [3.25444517s]
Dec 24 11:42:48.505: INFO: Created: latency-svc-qn75m
Dec 24 11:42:48.708: INFO: Got endpoints: latency-svc-qn75m [3.506944177s]
Dec 24 11:42:48.731: INFO: Created: latency-svc-wfxd5
Dec 24 11:42:48.768: INFO: Got endpoints: latency-svc-wfxd5 [3.384983503s]
Dec 24 11:42:48.974: INFO: Created: latency-svc-6w9b7
Dec 24 11:42:49.125: INFO: Got endpoints: latency-svc-6w9b7 [3.660941912s]
Dec 24 11:42:49.259: INFO: Created: latency-svc-nnzxg
Dec 24 11:42:49.398: INFO: Got endpoints: latency-svc-nnzxg [3.794820127s]
Dec 24 11:42:49.421: INFO: Created: latency-svc-5kk84
Dec 24 11:42:49.470: INFO: Got endpoints: latency-svc-5kk84 [2.67288836s]
Dec 24 11:42:49.586: INFO: Created: latency-svc-rw5tb
Dec 24 11:42:49.597: INFO: Got endpoints: latency-svc-rw5tb [2.607135702s]
Dec 24 11:42:49.837: INFO: Created: latency-svc-sph98
Dec 24 11:42:49.889: INFO: Created: latency-svc-klnnn
Dec 24 11:42:49.906: INFO: Got endpoints: latency-svc-klnnn [2.552409566s]
Dec 24 11:42:49.907: INFO: Got endpoints: latency-svc-sph98 [2.739863333s]
Dec 24 11:42:50.059: INFO: Created: latency-svc-r7q9t
Dec 24 11:42:50.066: INFO: Got endpoints: latency-svc-r7q9t [2.550262282s]
Dec 24 11:42:50.200: INFO: Created: latency-svc-x6rwh
Dec 24 11:42:50.204: INFO: Got endpoints: latency-svc-x6rwh [2.460195959s]
Dec 24 11:42:50.271: INFO: Created: latency-svc-r2fbt
Dec 24 11:42:50.346: INFO: Got endpoints: latency-svc-r2fbt [2.431586091s]
Dec 24 11:42:50.396: INFO: Created: latency-svc-8f2tv
Dec 24 11:42:50.625: INFO: Got endpoints: latency-svc-8f2tv [2.667954711s]
Dec 24 11:42:50.650: INFO: Created: latency-svc-jxsqs
Dec 24 11:42:50.687: INFO: Got endpoints: latency-svc-jxsqs [2.574402616s]
Dec 24 11:42:50.815: INFO: Created: latency-svc-7s6x9
Dec 24 11:42:50.842: INFO: Got endpoints: latency-svc-7s6x9 [2.585901408s]
Dec 24 11:42:51.021: INFO: Created: latency-svc-w9557
Dec 24 11:42:51.080: INFO: Got endpoints: latency-svc-w9557 [2.771827088s]
Dec 24 11:42:51.144: INFO: Created: latency-svc-q25rc
Dec 24 11:42:51.274: INFO: Got endpoints: latency-svc-q25rc [2.564752518s]
Dec 24 11:42:51.304: INFO: Created: latency-svc-rt2jn
Dec 24 11:42:51.320: INFO: Got endpoints: latency-svc-rt2jn [2.5521566s]
Dec 24 11:42:51.565: INFO: Created: latency-svc-rp9rs
Dec 24 11:42:51.580: INFO: Got endpoints: latency-svc-rp9rs [2.454529419s]
Dec 24 11:42:51.750: INFO: Created: latency-svc-csvxt
Dec 24 11:42:51.754: INFO: Got endpoints: latency-svc-csvxt [2.355720586s]
Dec 24 11:42:51.967: INFO: Created: latency-svc-42fj7
Dec 24 11:42:51.984: INFO: Got endpoints: latency-svc-42fj7 [2.514252455s]
Dec 24 11:42:52.132: INFO: Created: latency-svc-gcrrx
Dec 24 11:42:52.175: INFO: Got endpoints: latency-svc-gcrrx [2.578591263s]
Dec 24 11:42:52.328: INFO: Created: latency-svc-7kzvq
Dec 24 11:42:52.374: INFO: Got endpoints: latency-svc-7kzvq [2.466481069s]
Dec 24 11:42:52.576: INFO: Created: latency-svc-ljxlw
Dec 24 11:42:52.576: INFO: Got endpoints: latency-svc-ljxlw [2.669539799s]
Dec 24 11:42:52.737: INFO: Created: latency-svc-6vmx9
Dec 24 11:42:52.763: INFO: Got endpoints: latency-svc-6vmx9 [2.696222639s]
Dec 24 11:42:52.876: INFO: Created: latency-svc-j8g29
Dec 24 11:42:52.912: INFO: Got endpoints: latency-svc-j8g29 [2.708143412s]
Dec 24 11:42:53.113: INFO: Created: latency-svc-8hmmm
Dec 24 11:42:53.170: INFO: Got endpoints: latency-svc-8hmmm [2.823105693s]
Dec 24 11:42:53.298: INFO: Created: latency-svc-rvk6x
Dec 24 11:42:53.306: INFO: Got endpoints: latency-svc-rvk6x [2.680901283s]
Dec 24 11:42:53.364: INFO: Created: latency-svc-jf6tc
Dec 24 11:42:53.491: INFO: Got endpoints: latency-svc-jf6tc [2.803350078s]
Dec 24 11:42:53.549: INFO: Created: latency-svc-8p27t
Dec 24 11:42:53.549: INFO: Got endpoints: latency-svc-8p27t [2.707303926s]
Dec 24 11:42:53.702: INFO: Created: latency-svc-bjrkx
Dec 24 11:42:53.731: INFO: Got endpoints: latency-svc-bjrkx [2.650946745s]
Dec 24 11:42:53.914: INFO: Created: latency-svc-m55v7
Dec 24 11:42:53.927: INFO: Got endpoints: latency-svc-m55v7 [2.65294174s]
Dec 24 11:42:54.093: INFO: Created: latency-svc-ppv2d
Dec 24 11:42:54.099: INFO: Got endpoints: latency-svc-ppv2d [2.779160564s]
Dec 24 11:42:54.156: INFO: Created: latency-svc-z5bg2
Dec 24 11:42:54.265: INFO: Got endpoints: latency-svc-z5bg2 [2.685323187s]
Dec 24 11:42:54.315: INFO: Created: latency-svc-86t6q
Dec 24 11:42:54.316: INFO: Got endpoints: latency-svc-86t6q [2.5617161s]
Dec 24 11:42:54.499: INFO: Created: latency-svc-zx7xp
Dec 24 11:42:54.503: INFO: Got endpoints: latency-svc-zx7xp [2.518528929s]
Dec 24 11:42:54.659: INFO: Created: latency-svc-x8sxr
Dec 24 11:42:54.669: INFO: Got endpoints: latency-svc-x8sxr [2.492547237s]
Dec 24 11:42:54.692: INFO: Created: latency-svc-9zx2z
Dec 24 11:42:54.692: INFO: Got endpoints: latency-svc-9zx2z [2.317722152s]
Dec 24 11:42:54.756: INFO: Created: latency-svc-twx8g
Dec 24 11:42:54.875: INFO: Got endpoints: latency-svc-twx8g [2.298672267s]
Dec 24 11:42:54.942: INFO: Created: latency-svc-xw2t6
Dec 24 11:42:55.117: INFO: Got endpoints: latency-svc-xw2t6 [2.354276984s]
Dec 24 11:42:55.144: INFO: Created: latency-svc-mxjqg
Dec 24 11:42:55.153: INFO: Got endpoints: latency-svc-mxjqg [2.240161868s]
Dec 24 11:42:55.196: INFO: Created: latency-svc-gn7rr
Dec 24 11:42:55.201: INFO: Got endpoints: latency-svc-gn7rr [2.030628093s]
Dec 24 11:42:55.323: INFO: Created: latency-svc-vxv48
Dec 24 11:42:55.338: INFO: Got endpoints: latency-svc-vxv48 [2.031699072s]
Dec 24 11:42:55.561: INFO: Created: latency-svc-7knpr
Dec 24 11:42:55.603: INFO: Created: latency-svc-hrpmb
Dec 24 11:42:55.614: INFO: Got endpoints: latency-svc-7knpr [2.122545265s]
Dec 24 11:42:55.620: INFO: Got endpoints: latency-svc-hrpmb [2.070607835s]
Dec 24 11:42:55.795: INFO: Created: latency-svc-blw6n
Dec 24 11:42:55.831: INFO: Got endpoints: latency-svc-blw6n [2.09891667s]
Dec 24 11:42:56.048: INFO: Created: latency-svc-qmm4f
Dec 24 11:42:56.069: INFO: Got endpoints: latency-svc-qmm4f [2.141672793s]
Dec 24 11:42:56.132: INFO: Created: latency-svc-mzzcj
Dec 24 11:42:56.224: INFO: Got endpoints: latency-svc-mzzcj [2.124522669s]
Dec 24 11:42:56.658: INFO: Created: latency-svc-sgnfb
Dec 24 11:42:56.871: INFO: Got endpoints: latency-svc-sgnfb [2.605186319s]
Dec 24 11:42:56.987: INFO: Created: latency-svc-lms8b
Dec 24 11:42:57.143: INFO: Got endpoints: latency-svc-lms8b [2.827398634s]
Dec 24 11:42:57.351: INFO: Created: latency-svc-zr88r
Dec 24 11:42:57.369: INFO: Got endpoints: latency-svc-zr88r [2.865881158s]
Dec 24 11:42:57.370: INFO: Latencies: [150.077286ms 167.812149ms 387.684986ms 405.694987ms 610.820245ms 1.044530143s 1.08233435s 1.283061794s 1.459963377s 1.492640347s 1.533106042s 1.744357365s 1.964901467s 2.030628093s 2.031699072s 2.070607835s 2.092076282s 2.093499496s 2.09891667s 2.122545265s 2.124522669s 2.141672793s 2.145501459s 2.179098264s 2.227691774s 2.22927279s 2.233928465s 2.240161868s 2.258732613s 2.267662616s 2.274765s 2.282118701s 2.288319403s 2.298672267s 2.317722152s 2.354276984s 2.355720586s 2.363079058s 2.366602256s 2.375714886s 2.380424286s 2.401077565s 2.40898725s 2.421021594s 2.423482592s 2.423907131s 2.427774525s 2.431586091s 2.435853898s 2.444486055s 2.454529419s 2.4577483s 2.460195959s 2.466481069s 2.476459881s 2.492547237s 2.514252455s 2.518528929s 2.520432126s 2.523703045s 2.525223308s 2.53014988s 2.541621502s 2.548262208s 2.550262282s 2.5521566s 2.552409566s 2.5617161s 2.562829582s 2.564752518s 2.573603404s 2.574402616s 2.578591263s 2.585901408s 2.602265104s 2.605186319s 2.607135702s 2.616461086s 2.617536353s 2.621359749s 2.623446693s 2.627999249s 2.632848143s 2.633084592s 2.648834409s 2.650946745s 2.651192726s 2.65294174s 2.658875088s 2.659098051s 2.666213301s 2.666627684s 2.667954711s 2.669539799s 2.67083098s 2.67288836s 2.680901283s 2.685323187s 2.696222639s 2.707303926s 2.708143412s 2.718275196s 2.726345561s 2.727191541s 2.739863333s 2.742194935s 2.752434322s 2.753303036s 2.771827088s 2.774797301s 2.778020833s 2.779160564s 2.785566334s 2.786762413s 2.795284681s 2.79941941s 2.803350078s 2.805223775s 2.823105693s 2.824847088s 2.827398634s 2.829916151s 2.833764406s 2.83985249s 2.844841156s 2.852399487s 2.854961822s 2.861860281s 2.865881158s 2.873696811s 2.891118491s 2.89725416s 2.904744233s 2.913657474s 2.923484539s 2.937100952s 2.938091503s 2.944281738s 2.954944472s 2.970230578s 2.973441881s 2.980049207s 2.980822243s 2.992854235s 2.999909783s 3.011093209s 3.018826323s 3.036980043s 3.038776505s 3.049681801s 3.051839567s 3.052165781s 3.057945336s 3.076433605s 3.20427545s 3.25444517s 3.277671236s 3.277783567s 3.321083358s 3.355264392s 3.384983503s 3.425939271s 3.429578354s 3.438490106s 3.506944177s 3.578901917s 3.608290668s 3.61938699s 3.632177746s 3.653836089s 3.657341113s 3.660941912s 3.672843022s 3.676285112s 3.678208967s 3.765331983s 3.772827792s 3.793136748s 3.794820127s 3.809129839s 3.824446096s 3.844261484s 3.86165448s 3.885724549s 3.891488278s 3.893324444s 3.901251631s 3.9280644s 3.936675815s 3.950236378s 3.956497674s 3.959372466s 3.967009797s 3.967162009s 4.021323645s 4.267476165s 4.442203342s 4.593388594s 4.632381362s 4.678644091s]
Dec 24 11:42:57.370: INFO: 50 %ile: 2.708143412s
Dec 24 11:42:57.370: INFO: 90 %ile: 3.824446096s
Dec 24 11:42:57.370: INFO: 99 %ile: 4.632381362s
Dec 24 11:42:57.370: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:42:57.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-6gf52" for this suite.
Dec 24 11:43:53.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:43:53.507: INFO: namespace: e2e-tests-svc-latency-6gf52, resource: bindings, ignored listing per whitelist
Dec 24 11:43:53.584: INFO: namespace e2e-tests-svc-latency-6gf52 deletion completed in 56.19927905s

• [SLOW TEST:105.555 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
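Editor's note: the latencies reported above measure the gap between creating a Service and seeing its Endpoints populated. A crude manual approximation, assuming a working cluster and using illustrative names, is sketched below.

# Rough manual check: time from Service creation to populated Endpoints.
kubectl run latency-demo --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/latency-demo

start=$(date +%s%N)
kubectl expose pod latency-demo --port=80 --name=latency-demo-svc
until [ -n "$(kubectl get endpoints latency-demo-svc \
        -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
  sleep 0.1
done
end=$(date +%s%N)
echo "endpoints ready after $(( (end - start) / 1000000 )) ms"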
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:43:53.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 24 11:43:53.924: INFO: Waiting up to 5m0s for pod "pod-a952e177-2642-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-p85gr" to be "success or failure"
Dec 24 11:43:54.034: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 109.758677ms
Dec 24 11:43:56.176: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251894042s
Dec 24 11:43:58.192: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268400075s
Dec 24 11:44:00.277: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353362476s
Dec 24 11:44:02.310: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386062841s
Dec 24 11:44:04.339: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.415287357s
Dec 24 11:44:06.371: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.447384935s
STEP: Saw pod success
Dec 24 11:44:06.371: INFO: Pod "pod-a952e177-2642-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:44:06.381: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a952e177-2642-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:44:06.447: INFO: Waiting for pod pod-a952e177-2642-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:44:06.451: INFO: Pod pod-a952e177-2642-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:44:06.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p85gr" for this suite.
Dec 24 11:44:12.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:44:12.912: INFO: namespace: e2e-tests-emptydir-p85gr, resource: bindings, ignored listing per whitelist
Dec 24 11:44:12.914: INFO: namespace e2e-tests-emptydir-p85gr deletion completed in 6.445466255s

• [SLOW TEST:19.329 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
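Editor's note: a hands-on equivalent of the emptyDir (root,0644,default) case above is to write a file into an emptyDir mount, set mode 0644, and read it back. Names are illustrative.

# Sketch only: emptyDir on the default medium, file written with 0644.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume/file && cat /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}
EOF

kubectl logs emptydir-0644-demo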
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:44:12.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 24 11:44:13.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:13.620: INFO: stderr: ""
Dec 24 11:44:13.620: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 11:44:13.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:13.838: INFO: stderr: ""
Dec 24 11:44:13.838: INFO: stdout: "update-demo-nautilus-7rxzh update-demo-nautilus-jcxv4 "
Dec 24 11:44:13.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rxzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:14.114: INFO: stderr: ""
Dec 24 11:44:14.114: INFO: stdout: ""
Dec 24 11:44:14.114: INFO: update-demo-nautilus-7rxzh is created but not running
Dec 24 11:44:19.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:19.291: INFO: stderr: ""
Dec 24 11:44:19.291: INFO: stdout: "update-demo-nautilus-7rxzh update-demo-nautilus-jcxv4 "
Dec 24 11:44:19.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rxzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:19.444: INFO: stderr: ""
Dec 24 11:44:19.444: INFO: stdout: ""
Dec 24 11:44:19.444: INFO: update-demo-nautilus-7rxzh is created but not running
Dec 24 11:44:24.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:24.679: INFO: stderr: ""
Dec 24 11:44:24.680: INFO: stdout: "update-demo-nautilus-7rxzh update-demo-nautilus-jcxv4 "
Dec 24 11:44:24.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rxzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:24.845: INFO: stderr: ""
Dec 24 11:44:24.845: INFO: stdout: ""
Dec 24 11:44:24.845: INFO: update-demo-nautilus-7rxzh is created but not running
Dec 24 11:44:29.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:30.029: INFO: stderr: ""
Dec 24 11:44:30.029: INFO: stdout: "update-demo-nautilus-7rxzh update-demo-nautilus-jcxv4 "
Dec 24 11:44:30.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rxzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:30.194: INFO: stderr: ""
Dec 24 11:44:30.194: INFO: stdout: "true"
Dec 24 11:44:30.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rxzh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:30.335: INFO: stderr: ""
Dec 24 11:44:30.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 11:44:30.335: INFO: validating pod update-demo-nautilus-7rxzh
Dec 24 11:44:30.385: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 11:44:30.385: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 11:44:30.385: INFO: update-demo-nautilus-7rxzh is verified up and running
Dec 24 11:44:30.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcxv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:30.597: INFO: stderr: ""
Dec 24 11:44:30.597: INFO: stdout: "true"
Dec 24 11:44:30.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcxv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:30.724: INFO: stderr: ""
Dec 24 11:44:30.724: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 11:44:30.724: INFO: validating pod update-demo-nautilus-jcxv4
Dec 24 11:44:30.738: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 11:44:30.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 11:44:30.738: INFO: update-demo-nautilus-jcxv4 is verified up and running
STEP: using delete to clean up resources
Dec 24 11:44:30.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:30.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 11:44:30.912: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 24 11:44:30.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-w2b54'
Dec 24 11:44:31.266: INFO: stderr: "No resources found.\n"
Dec 24 11:44:31.266: INFO: stdout: ""
Dec 24 11:44:31.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-w2b54 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 11:44:31.382: INFO: stderr: ""
Dec 24 11:44:31.383: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:44:31.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w2b54" for this suite.
Dec 24 11:44:55.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:44:55.586: INFO: namespace: e2e-tests-kubectl-w2b54, resource: bindings, ignored listing per whitelist
Dec 24 11:44:55.586: INFO: namespace e2e-tests-kubectl-w2b54 deletion completed in 24.189661439s

• [SLOW TEST:42.672 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
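Editor's note: condensing the kubectl calls above, the create/verify/stop cycle for the replication controller looks roughly like the following. The RC manifest is a reconstruction for illustration, not the exact document piped to 'create -f -' in the log.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF

kubectl get pods -l name=update-demo
kubectl delete rc update-demo-nautilus --grace-period=0 --force
kubectl get rc,svc -l name=update-demo --no-headers    # expect "No resources found."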
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:44:55.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-ce474772-2642-11ea-b7c4-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:45:08.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m9bkg" for this suite.
Dec 24 11:45:32.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:45:32.248: INFO: namespace: e2e-tests-configmap-m9bkg, resource: bindings, ignored listing per whitelist
Dec 24 11:45:32.299: INFO: namespace e2e-tests-configmap-m9bkg deletion completed in 24.202476637s

• [SLOW TEST:36.712 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
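Editor's note: to replay the binary-data case above by hand, something like the following should work; kubectl is expected to store non-UTF-8 --from-file content under binaryData. The names and payload are illustrative.

# Sketch only: put binary data into a ConfigMap and read it back from a volume.
head -c 16 /dev/urandom > payload.bin
md5sum payload.bin
kubectl create configmap binary-demo --from-file=payload.bin

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "md5sum /etc/config/payload.bin"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: binary-demo
EOF

# The checksum printed by the pod should match the local payload.bin.
kubectl logs configmap-binary-demo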
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:45:32.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-e42f4148-2642-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 11:45:32.648: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-n2pct" to be "success or failure"
Dec 24 11:45:32.656: INFO: Pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253081ms
Dec 24 11:45:34.961: INFO: Pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31265214s
Dec 24 11:45:36.972: INFO: Pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323692577s
Dec 24 11:45:38.985: INFO: Pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.337110763s
Dec 24 11:45:41.380: INFO: Pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.732438207s
Dec 24 11:45:43.406: INFO: Pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.758317899s
STEP: Saw pod success
Dec 24 11:45:43.406: INFO: Pod "pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:45:43.431: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 11:45:43.607: INFO: Waiting for pod pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:45:43.614: INFO: Pod pod-projected-configmaps-e430cc04-2642-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:45:43.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n2pct" for this suite.
Dec 24 11:45:49.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:45:49.817: INFO: namespace: e2e-tests-projected-n2pct, resource: bindings, ignored listing per whitelist
Dec 24 11:45:49.943: INFO: namespace e2e-tests-projected-n2pct deletion completed in 6.321086181s

• [SLOW TEST:17.643 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
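The test above consumes a ConfigMap through a projected volume with defaultMode set. As a point of reference only, here is a minimal Go sketch of that kind of pod spec, built with the k8s.io/api types rather than the suite's own helpers; the object names, the busybox image, and the 0400 mode are assumptions chosen for illustration, and the program simply prints the resulting manifest as JSON (it needs a module with k8s.io/api and k8s.io/apimachinery on the path).

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // file mode applied to the projected keys (illustrative)

    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/*"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                            },
                        }},
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}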
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:45:49.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1224 11:45:53.264740       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 11:45:53.264: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:45:53.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-pdvmh" for this suite.
Dec 24 11:45:59.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:46:00.187: INFO: namespace: e2e-tests-gc-pdvmh, resource: bindings, ignored listing per whitelist
Dec 24 11:46:00.256: INFO: namespace e2e-tests-gc-pdvmh deletion completed in 6.970861339s

• [SLOW TEST:10.313 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
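The garbage-collector test above deletes a Deployment without orphaning, so its ReplicaSet and pods are removed once the collector catches up (the intermediate "expected 0 rs, got 1 rs" lines are retries while that happens). A small hedged sketch of the knob involved, the deletion propagation policy carried in metav1.DeleteOptions; the constants below are the standard API values, nothing here is lifted from the suite's code.

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    background := metav1.DeletePropagationBackground // dependents are deleted by the GC after the owner
    orphan := metav1.DeletePropagationOrphan         // dependents are kept (orphaned)

    for _, p := range []metav1.DeletionPropagation{background, orphan} {
        policy := p
        opts := metav1.DeleteOptions{PropagationPolicy: &policy}
        out, _ := json.Marshal(opts)
        fmt.Println(string(out))
    }
}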
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:46:00.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 24 11:46:00.424: INFO: Waiting up to 5m0s for pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005" in namespace "e2e-tests-containers-s9cdd" to be "success or failure"
Dec 24 11:46:00.432: INFO: Pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.937543ms
Dec 24 11:46:02.802: INFO: Pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37763327s
Dec 24 11:46:04.827: INFO: Pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402072103s
Dec 24 11:46:06.868: INFO: Pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443211081s
Dec 24 11:46:08.901: INFO: Pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476884147s
Dec 24 11:46:10.921: INFO: Pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.496548116s
STEP: Saw pod success
Dec 24 11:46:10.921: INFO: Pod "client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:46:10.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 11:46:11.256: INFO: Waiting for pod client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:46:11.283: INFO: Pod client-containers-f4be8d90-2642-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:46:11.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-s9cdd" for this suite.
Dec 24 11:46:17.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:46:17.599: INFO: namespace: e2e-tests-containers-s9cdd, resource: bindings, ignored listing per whitelist
Dec 24 11:46:17.735: INFO: namespace e2e-tests-containers-s9cdd deletion completed in 6.407698007s

• [SLOW TEST:17.479 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
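This test's "override all" pod sets both command and args, which replace the image's ENTRYPOINT and CMD respectively. A minimal illustrative container spec, with assumed values rather than the ones the suite uses:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "test-container",
        Image:   "busybox",
        Command: []string{"/bin/echo"},             // overrides the image ENTRYPOINT
        Args:    []string{"override", "arguments"}, // overrides the image CMD
    }
    out, _ := json.MarshalIndent(c, "", "  ")
    fmt.Println(string(out))
}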
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:46:17.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-ff31e67f-2642-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:46:17.967: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-mxwsp" to be "success or failure"
Dec 24 11:46:18.050: INFO: Pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 83.52675ms
Dec 24 11:46:20.070: INFO: Pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103834096s
Dec 24 11:46:22.100: INFO: Pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132944448s
Dec 24 11:46:24.119: INFO: Pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152035623s
Dec 24 11:46:26.173: INFO: Pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205854518s
Dec 24 11:46:28.201: INFO: Pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.234123417s
STEP: Saw pod success
Dec 24 11:46:28.201: INFO: Pod "pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:46:28.217: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 11:46:28.723: INFO: Waiting for pod pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:46:28.730: INFO: Pod pod-projected-secrets-ff32ad77-2642-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:46:28.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mxwsp" for this suite.
Dec 24 11:46:34.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:46:34.945: INFO: namespace: e2e-tests-projected-mxwsp, resource: bindings, ignored listing per whitelist
Dec 24 11:46:35.012: INFO: namespace e2e-tests-projected-mxwsp deletion completed in 6.275241049s

• [SLOW TEST:17.276 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
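For the non-root/fsGroup variant above, the relevant knobs live in the pod-level security context plus the projected secret volume's defaultMode. The sketch below is illustrative only; the UID, GID, 0440 mode, and object names are assumptions, not the values the suite applies.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid := int64(1000)     // run as a non-root UID
    fsGroup := int64(1001) // volume files are group-owned by this GID
    mode := int32(0440)

    spec := corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{
            RunAsUser: &uid,
            FSGroup:   &fsGroup,
        },
        Containers: []corev1.Container{{
            Name:  "projected-secret-volume-test",
            Image: "busybox",
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "projected-secret-volume",
                MountPath: "/etc/projected-secret-volume",
                ReadOnly:  true,
            }},
        }},
        Volumes: []corev1.Volume{{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                        },
                    }},
                },
            },
        }},
    }
    out, _ := json.MarshalIndent(spec, "", "  ")
    fmt.Println(string(out))
}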
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:46:35.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rqrq8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 11:46:35.283: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 11:47:09.600: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-rqrq8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 11:47:09.600: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 11:47:11.191: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:47:11.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rqrq8" for this suite.
Dec 24 11:47:35.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:47:35.365: INFO: namespace: e2e-tests-pod-network-test-rqrq8, resource: bindings, ignored listing per whitelist
Dec 24 11:47:35.444: INFO: namespace e2e-tests-pod-network-test-rqrq8 deletion completed in 24.231675264s

• [SLOW TEST:60.431 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:47:35.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:48:35.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gpn8g" for this suite.
Dec 24 11:48:59.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:48:59.974: INFO: namespace: e2e-tests-container-probe-gpn8g, resource: bindings, ignored listing per whitelist
Dec 24 11:49:00.027: INFO: namespace e2e-tests-container-probe-gpn8g deletion completed in 24.291494039s

• [SLOW TEST:84.583 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:49:00.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:49:00.231: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-mrcb4" to be "success or failure"
Dec 24 11:49:00.244: INFO: Pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.729864ms
Dec 24 11:49:02.258: INFO: Pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026262116s
Dec 24 11:49:04.278: INFO: Pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046764094s
Dec 24 11:49:06.679: INFO: Pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447353795s
Dec 24 11:49:08.704: INFO: Pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472423258s
Dec 24 11:49:10.753: INFO: Pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.522030099s
STEP: Saw pod success
Dec 24 11:49:10.754: INFO: Pod "downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:49:10.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:49:11.137: INFO: Waiting for pod downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:49:11.156: INFO: Pod downwardapi-volume-5feb77a0-2643-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:49:11.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mrcb4" for this suite.
Dec 24 11:49:17.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:49:17.333: INFO: namespace: e2e-tests-downward-api-mrcb4, resource: bindings, ignored listing per whitelist
Dec 24 11:49:17.461: INFO: namespace e2e-tests-downward-api-mrcb4 deletion completed in 6.295410433s

• [SLOW TEST:17.434 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
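The downward API volume in this test surfaces the container's memory limit as a file, via a resourceFieldRef item. A minimal sketch of just that volume; the file path and container name are assumed for illustration:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "memory_limit", // file exposed inside the mount
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.memory",
                    },
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}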
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:49:17.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:49:17.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-c7wqm" to be "success or failure"
Dec 24 11:49:17.679: INFO: Pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.346635ms
Dec 24 11:49:19.698: INFO: Pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035326791s
Dec 24 11:49:21.714: INFO: Pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052234103s
Dec 24 11:49:23.731: INFO: Pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068699466s
Dec 24 11:49:25.773: INFO: Pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11110677s
Dec 24 11:49:27.785: INFO: Pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122346568s
STEP: Saw pod success
Dec 24 11:49:27.785: INFO: Pod "downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:49:27.788: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:49:28.430: INFO: Waiting for pod downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:49:28.742: INFO: Pod downwardapi-volume-6a489f64-2643-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:49:28.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c7wqm" for this suite.
Dec 24 11:49:35.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:49:35.179: INFO: namespace: e2e-tests-projected-c7wqm, resource: bindings, ignored listing per whitelist
Dec 24 11:49:35.267: INFO: namespace e2e-tests-projected-c7wqm deletion completed in 6.244739266s

• [SLOW TEST:17.806 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:49:35.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1224 11:49:45.585326       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 11:49:45.585: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:49:45.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-67zsb" for this suite.
Dec 24 11:49:51.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:49:51.864: INFO: namespace: e2e-tests-gc-67zsb, resource: bindings, ignored listing per whitelist
Dec 24 11:49:51.971: INFO: namespace e2e-tests-gc-67zsb deletion completed in 6.378089288s

• [SLOW TEST:16.704 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:49:51.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7ee5a166-2643-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:49:52.212: INFO: Waiting up to 5m0s for pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-dsxlh" to be "success or failure"
Dec 24 11:49:52.216: INFO: Pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.737363ms
Dec 24 11:49:54.231: INFO: Pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019164686s
Dec 24 11:49:56.254: INFO: Pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041729108s
Dec 24 11:49:58.281: INFO: Pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069008981s
Dec 24 11:50:00.295: INFO: Pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082979279s
Dec 24 11:50:02.310: INFO: Pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09752162s
STEP: Saw pod success
Dec 24 11:50:02.310: INFO: Pod "pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:50:02.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005 container secret-env-test: 
STEP: delete the pod
Dec 24 11:50:02.495: INFO: Waiting for pod pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:50:02.509: INFO: Pod pod-secrets-7ee76325-2643-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:50:02.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dsxlh" for this suite.
Dec 24 11:50:09.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:50:09.369: INFO: namespace: e2e-tests-secrets-dsxlh, resource: bindings, ignored listing per whitelist
Dec 24 11:50:09.637: INFO: namespace e2e-tests-secrets-dsxlh deletion completed in 7.112600031s

• [SLOW TEST:17.665 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
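Consuming a Secret through environment variables, as this test does, hangs off an env var's valueFrom.secretKeyRef. A small sketch with an assumed secret name and key:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    env := corev1.EnvVar{
        Name: "SECRET_DATA",
        ValueFrom: &corev1.EnvVarSource{
            SecretKeyRef: &corev1.SecretKeySelector{
                LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                Key:                  "data-1",
            },
        },
    }
    out, _ := json.MarshalIndent(env, "", "  ")
    fmt.Println(string(out))
}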
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:50:09.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 24 11:50:09.811: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 24 11:50:09.832: INFO: Waiting for terminating namespaces to be deleted...
Dec 24 11:50:09.838: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 24 11:50:09.868: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:50:09.868: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 24 11:50:09.868: INFO: 	Container weave ready: true, restart count 0
Dec 24 11:50:09.868: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 11:50:09.868: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 24 11:50:09.868: INFO: 	Container coredns ready: true, restart count 0
Dec 24 11:50:09.868: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:50:09.868: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:50:09.868: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:50:09.869: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 24 11:50:09.869: INFO: 	Container coredns ready: true, restart count 0
Dec 24 11:50:09.869: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 24 11:50:09.869: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e34c521b41465c], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:50:11.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-j78bb" for this suite.
Dec 24 11:50:17.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:50:17.182: INFO: namespace: e2e-tests-sched-pred-j78bb, resource: bindings, ignored listing per whitelist
Dec 24 11:50:17.248: INFO: namespace e2e-tests-sched-pred-j78bb deletion completed in 6.196018359s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.611 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
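The FailedScheduling event above comes from a pod whose nodeSelector matches no node label; the matching variant of this test, later in the log, first applies such a label to the node and then relaunches the pod. The field in question is plain PodSpec.NodeSelector; the label key and value below are made up for illustration.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    spec := corev1.PodSpec{
        NodeSelector: map[string]string{
            // No node carries this label, so the scheduler reports
            // "0/1 nodes are available: 1 node(s) didn't match node selector."
            "kubernetes.io/e2e-nonexistent-label": "42",
        },
        Containers: []corev1.Container{{Name: "restricted-pod", Image: "k8s.gcr.io/pause:3.1"}},
    }
    out, _ := json.MarshalIndent(spec, "", "  ")
    fmt.Println(string(out))
}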
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:50:17.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 11:50:17.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-wbth4'
Dec 24 11:50:19.420: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 11:50:19.420: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 24 11:50:21.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-wbth4'
Dec 24 11:50:22.351: INFO: stderr: ""
Dec 24 11:50:22.351: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:50:22.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wbth4" for this suite.
Dec 24 11:50:28.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:50:28.817: INFO: namespace: e2e-tests-kubectl-wbth4, resource: bindings, ignored listing per whitelist
Dec 24 11:50:28.853: INFO: namespace e2e-tests-kubectl-wbth4 deletion completed in 6.258647407s

• [SLOW TEST:11.605 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
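The stderr captured above notes that kubectl run --generator=deployment/apps.v1 is deprecated. What that generator produces is an ordinary apps/v1 Deployment; below is a hedged sketch of an equivalent object built directly with the Go API types. The "run" label convention mirrors what the generator applied, but treat the details as assumptions rather than a byte-for-byte reproduction.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"run": "e2e-test-nginx-deployment"}

    d := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "e2e-test-nginx-deployment",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(out))
}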
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:50:28.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 24 11:50:39.936: INFO: Successfully updated pod "annotationupdate94f5adba-2643-11ea-b7c4-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:50:42.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ckxng" for this suite.
Dec 24 11:51:04.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:51:04.267: INFO: namespace: e2e-tests-projected-ckxng, resource: bindings, ignored listing per whitelist
Dec 24 11:51:04.295: INFO: namespace e2e-tests-projected-ckxng deletion completed in 22.210342184s

• [SLOW TEST:35.441 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:51:04.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 24 11:51:15.045: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a9f09ef6-2643-11ea-b7c4-0242ac110005"
Dec 24 11:51:15.045: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a9f09ef6-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-pods-klpf7" to be "terminated due to deadline exceeded"
Dec 24 11:51:15.051: INFO: Pod "pod-update-activedeadlineseconds-a9f09ef6-2643-11ea-b7c4-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 6.091375ms
Dec 24 11:51:17.100: INFO: Pod "pod-update-activedeadlineseconds-a9f09ef6-2643-11ea-b7c4-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.054817172s
Dec 24 11:51:17.100: INFO: Pod "pod-update-activedeadlineseconds-a9f09ef6-2643-11ea-b7c4-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:51:17.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-klpf7" for this suite.
Dec 24 11:51:23.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:51:23.304: INFO: namespace: e2e-tests-pods-klpf7, resource: bindings, ignored listing per whitelist
Dec 24 11:51:23.369: INFO: namespace e2e-tests-pods-klpf7 deletion completed in 6.257905601s

• [SLOW TEST:19.073 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
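The pod in this test is updated to a short activeDeadlineSeconds and then, as the log shows, moves to Phase=Failed with Reason=DeadlineExceeded. The field itself is a single pointer on the pod spec; an illustrative sketch with an assumed 5-second deadline:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    deadline := int64(5) // seconds the pod may remain active once the field is set

    spec := corev1.PodSpec{
        ActiveDeadlineSeconds: &deadline,
        Containers:            []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
    }
    out, _ := json.MarshalIndent(spec, "", "  ")
    fmt.Println(string(out))
}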
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:51:23.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 24 11:51:23.575: INFO: Waiting up to 5m0s for pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-var-expansion-msrrp" to be "success or failure"
Dec 24 11:51:23.599: INFO: Pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.447806ms
Dec 24 11:51:25.731: INFO: Pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155681669s
Dec 24 11:51:27.743: INFO: Pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167954548s
Dec 24 11:51:29.815: INFO: Pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239695762s
Dec 24 11:51:31.841: INFO: Pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266011723s
Dec 24 11:51:33.857: INFO: Pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.281093035s
STEP: Saw pod success
Dec 24 11:51:33.857: INFO: Pod "var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:51:33.861: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 11:51:34.409: INFO: Waiting for pod var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:51:34.807: INFO: Pod var-expansion-b55d2608-2643-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:51:34.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-msrrp" for this suite.
Dec 24 11:51:40.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:51:41.109: INFO: namespace: e2e-tests-var-expansion-msrrp, resource: bindings, ignored listing per whitelist
Dec 24 11:51:41.111: INFO: namespace e2e-tests-var-expansion-msrrp deletion completed in 6.277476073s

• [SLOW TEST:17.742 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
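Variable expansion in a container's args, exercised above, uses the $(VAR) syntax, which the kubelet resolves from the container's env before the process starts. A minimal sketch with assumed names:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "dapi-container",
        Image: "busybox",
        Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
        // $(TEST_VAR) below is expanded to "test-value" before the shell runs.
        Command: []string{"sh", "-c"},
        Args:    []string{"echo $(TEST_VAR)"},
    }
    out, _ := json.MarshalIndent(c, "", "  ")
    fmt.Println(string(out))
}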
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:51:41.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 24 11:51:41.269: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 24 11:51:41.280: INFO: Waiting for terminating namespaces to be deleted...
Dec 24 11:51:41.285: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 24 11:51:41.301: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 24 11:51:41.301: INFO: 	Container weave ready: true, restart count 0
Dec 24 11:51:41.301: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 11:51:41.301: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 24 11:51:41.301: INFO: 	Container coredns ready: true, restart count 0
Dec 24 11:51:41.301: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:51:41.301: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:51:41.301: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:51:41.301: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 24 11:51:41.301: INFO: 	Container coredns ready: true, restart count 0
Dec 24 11:51:41.301: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 24 11:51:41.301: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 11:51:41.301: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 24 11:51:41.385: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bffc158a-2643-11ea-b7c4-0242ac110005.15e34c676270a04a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-snmsp/filler-pod-bffc158a-2643-11ea-b7c4-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bffc158a-2643-11ea-b7c4-0242ac110005.15e34c688caa6f93], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bffc158a-2643-11ea-b7c4-0242ac110005.15e34c69306caf6a], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bffc158a-2643-11ea-b7c4-0242ac110005.15e34c695722d0bf], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e34c69b91a07f3], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:51:52.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-snmsp" for this suite.
Dec 24 11:52:00.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:52:00.964: INFO: namespace: e2e-tests-sched-pred-snmsp, resource: bindings, ignored listing per whitelist
Dec 24 11:52:01.031: INFO: namespace e2e-tests-sched-pred-snmsp deletion completed in 8.290814432s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:19.920 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:52:01.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 24 11:52:01.401: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 24 11:52:01.421: INFO: Waiting for terminating namespaces to be deleted...
Dec 24 11:52:01.427: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 24 11:52:01.450: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 24 11:52:01.450: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 11:52:01.450: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:52:01.450: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 24 11:52:01.450: INFO: 	Container weave ready: true, restart count 0
Dec 24 11:52:01.450: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 11:52:01.450: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 24 11:52:01.450: INFO: 	Container coredns ready: true, restart count 0
Dec 24 11:52:01.450: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:52:01.450: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:52:01.450: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 24 11:52:01.450: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 24 11:52:01.450: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d2092b4c-2643-11ea-b7c4-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d2092b4c-2643-11ea-b7c4-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d2092b4c-2643-11ea-b7c4-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:52:22.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-mqrln" for this suite.
Dec 24 11:52:34.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:52:34.190: INFO: namespace: e2e-tests-sched-pred-mqrln, resource: bindings, ignored listing per whitelist
Dec 24 11:52:34.263: INFO: namespace e2e-tests-sched-pred-mqrln deletion completed in 12.207293679s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:33.231 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:52:34.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 24 11:52:34.568: INFO: Waiting up to 5m0s for pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-var-expansion-djvpj" to be "success or failure"
Dec 24 11:52:34.651: INFO: Pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 82.845218ms
Dec 24 11:52:36.666: INFO: Pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09794486s
Dec 24 11:52:38.912: INFO: Pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344240008s
Dec 24 11:52:40.938: INFO: Pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.370118972s
Dec 24 11:52:42.983: INFO: Pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.414849115s
Dec 24 11:52:45.151: INFO: Pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.582904096s
STEP: Saw pod success
Dec 24 11:52:45.151: INFO: Pod "var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:52:45.163: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 11:52:45.448: INFO: Waiting for pod var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:52:45.458: INFO: Pod var-expansion-dfa86cc8-2643-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:52:45.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-djvpj" for this suite.
Dec 24 11:52:53.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:52:53.897: INFO: namespace: e2e-tests-var-expansion-djvpj, resource: bindings, ignored listing per whitelist
Dec 24 11:52:53.918: INFO: namespace e2e-tests-var-expansion-djvpj deletion completed in 8.451621911s

• [SLOW TEST:19.655 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
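The variable-expansion test above relies on the kubelet expanding $(VAR) references in a container's command/args from that container's env. A minimal sketch of such a pod, assuming an illustrative variable name MY_VAR and image; the container name dapi-container is the one shown in the log:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The kubelet substitutes $(MY_VAR) in command/args using the container's env.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container", // container name from the log
                Image:   "busybox",        // illustrative image
                Command: []string{"sh", "-c", "echo expanded: $(MY_VAR)"},
                Env: []corev1.EnvVar{{
                    Name:  "MY_VAR", // illustrative variable
                    Value: "test-value",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}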
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:52:53.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-eb637699-2643-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:52:54.286: INFO: Waiting up to 5m0s for pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-kpbdr" to be "success or failure"
Dec 24 11:52:54.330: INFO: Pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.067396ms
Dec 24 11:52:56.450: INFO: Pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163216946s
Dec 24 11:52:58.471: INFO: Pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18447688s
Dec 24 11:53:00.498: INFO: Pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211370549s
Dec 24 11:53:02.542: INFO: Pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255865899s
Dec 24 11:53:04.574: INFO: Pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.287572212s
STEP: Saw pod success
Dec 24 11:53:04.574: INFO: Pod "pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:53:04.592: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 11:53:04.710: INFO: Waiting for pod pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:53:04.763: INFO: Pod pod-secrets-eb6d7285-2643-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:53:04.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kpbdr" for this suite.
Dec 24 11:53:10.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:53:11.058: INFO: namespace: e2e-tests-secrets-kpbdr, resource: bindings, ignored listing per whitelist
Dec 24 11:53:11.074: INFO: namespace e2e-tests-secrets-kpbdr deletion completed in 6.189203067s

• [SLOW TEST:17.153 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
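"With mappings" in the Secrets test above means the secret volume uses items to remap a key onto a chosen file path inside the mount. A minimal sketch; the secret name and the container name secret-volume-test come from the log, while the key, target path and image are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: "secret-test-map-eb637699-2643-11ea-b7c4-0242ac110005", // name from the log
                        // Mapping: secret key -> file path inside the volume.
                        Items: []corev1.KeyToPath{{
                            Key:  "data-1",          // illustrative key
                            Path: "new-path-data-1", // illustrative target path
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "secret-volume-test", // container name from the log
                Image:   "busybox",            // illustrative image
                Command: []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "secret-volume",
                    MountPath: "/etc/secret-volume",
                    ReadOnly:  true,
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}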
SSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:53:11.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:53:11.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-gb46k" for this suite.
Dec 24 11:53:17.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:53:17.460: INFO: namespace: e2e-tests-services-gb46k, resource: bindings, ignored listing per whitelist
Dec 24 11:53:17.533: INFO: namespace e2e-tests-services-gb46k deletion completed in 6.208819291s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.458 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:53:17.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-f9714f42-2643-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:53:17.923: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-jdj7m" to be "success or failure"
Dec 24 11:53:17.965: INFO: Pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.513191ms
Dec 24 11:53:19.998: INFO: Pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075531595s
Dec 24 11:53:22.012: INFO: Pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089274496s
Dec 24 11:53:24.029: INFO: Pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106206732s
Dec 24 11:53:26.045: INFO: Pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121852255s
Dec 24 11:53:28.072: INFO: Pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148708027s
STEP: Saw pod success
Dec 24 11:53:28.072: INFO: Pod "pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:53:28.078: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 11:53:28.144: INFO: Waiting for pod pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:53:28.153: INFO: Pod pod-projected-secrets-f9743193-2643-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:53:28.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jdj7m" for this suite.
Dec 24 11:53:34.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:53:34.309: INFO: namespace: e2e-tests-projected-jdj7m, resource: bindings, ignored listing per whitelist
Dec 24 11:53:34.311: INFO: namespace e2e-tests-projected-jdj7m deletion completed in 6.150537355s

• [SLOW TEST:16.778 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
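The projected variant above is the same idea as the plain secret volume, except the secret is wrapped in a projected volume source, which can merge several sources (secret, configMap, downwardAPI, serviceAccountToken) under one mount. A minimal sketch; the projected-secret name and container name come from the log, the key/path mapping and image are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-secret-test-map-f9714f42-2643-11ea-b7c4-0242ac110005", // from the log
                                },
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}}, // illustrative mapping
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "projected-secret-volume-test", // container name from the log
                Image: "busybox",                      // illustrative image
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                    ReadOnly:  true,
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}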
S
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:53:34.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 11:53:34.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-zrrpd" to be "success or failure"
Dec 24 11:53:34.632: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.099769ms
Dec 24 11:53:36.667: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050815859s
Dec 24 11:53:38.681: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065066351s
Dec 24 11:53:40.691: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075003662s
Dec 24 11:53:42.712: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096307603s
Dec 24 11:53:44.758: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.141930606s
Dec 24 11:53:46.938: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.321611966s
STEP: Saw pod success
Dec 24 11:53:46.938: INFO: Pod "downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:53:46.952: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 11:53:47.259: INFO: Waiting for pod downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:53:47.331: INFO: Pod downwardapi-volume-0376fe09-2644-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:53:47.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zrrpd" for this suite.
Dec 24 11:53:53.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:53:53.594: INFO: namespace: e2e-tests-downward-api-zrrpd, resource: bindings, ignored listing per whitelist
Dec 24 11:53:53.594: INFO: namespace e2e-tests-downward-api-zrrpd deletion completed in 6.242752076s

• [SLOW TEST:19.282 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
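The DefaultMode test above checks that files written by a downward API volume pick up the mode set in spec.volumes[].downwardAPI.defaultMode. A minimal sketch; the container name client-container is from the log, while the 0400 mode, file path and image are assumed for illustration:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    defaultMode := int32(0400) // illustrative mode the volume's files should carry

    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        DefaultMode: &defaultMode,
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.name",
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container", // container name from the log
                Image:   "busybox",          // illustrative image
                Command: []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}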
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:53:53.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 24 11:56:58.070: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:56:58.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:00.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:00.217: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:02.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:02.218: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:04.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:04.221: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:06.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:06.217: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:08.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:08.213: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:10.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:10.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:12.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:12.218: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:14.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:14.222: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:16.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:16.213: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:18.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:18.211: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:20.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:20.216: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:22.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:22.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:24.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:24.210: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:26.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:26.249: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:28.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:28.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:30.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:30.215: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:32.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:32.216: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:34.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:34.211: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:36.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:36.268: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:38.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:38.216: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:40.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:40.211: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:42.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:42.211: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:44.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:44.221: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:46.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:46.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:48.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:48.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:50.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:50.219: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:52.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:52.224: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:54.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:54.253: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:56.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:56.209: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:57:58.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:57:58.214: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:00.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:00.210: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:02.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:02.253: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:04.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:04.214: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:06.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:06.213: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:08.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:08.218: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:10.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:10.217: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:12.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:12.219: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:14.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:14.220: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:16.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:16.217: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:18.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:18.215: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:20.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:20.217: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:22.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:22.210: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:24.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:24.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:26.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:26.213: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:28.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:28.228: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:30.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:30.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:32.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:32.211: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:34.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:34.212: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:36.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:36.228: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:38.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:38.216: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:40.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:40.221: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:42.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:42.211: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:44.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:44.213: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:46.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:46.211: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 11:58:48.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 11:58:48.230: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:58:48.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7j4lc" for this suite.
Dec 24 11:59:12.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:59:12.789: INFO: namespace: e2e-tests-container-lifecycle-hook-7j4lc, resource: bindings, ignored listing per whitelist
Dec 24 11:59:12.795: INFO: namespace e2e-tests-container-lifecycle-hook-7j4lc deletion completed in 24.552871563s

• [SLOW TEST:319.200 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
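The lifecycle-hook test above creates a pod whose container declares a postStart exec hook; the kubelet does not report the container Running until the hook completes, and a failing hook gets the container killed and restarted per its restart policy. A minimal sketch using the pre-1.23 core/v1 type name Handler (later renamed LifecycleHandler); the pod name matches the log, while the container name, image and hook command are illustrative (the real test has the hook call back to the helper pod created in the BeforeEach step):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"}, // pod name from the log
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "poststart-exec-demo", // illustrative container name
                Image:   "busybox",             // illustrative image
                Command: []string{"sh", "-c", "sleep 600"},
                Lifecycle: &corev1.Lifecycle{
                    // postStart runs right after the container starts; readiness is
                    // withheld until the hook returns.
                    PostStart: &corev1.Handler{
                        Exec: &corev1.ExecAction{
                            Command: []string{"sh", "-c", "echo post-start > /tmp/poststart"}, // illustrative hook
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}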
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:59:12.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-cd2dcb16-2644-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 11:59:13.101: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-92l9p" to be "success or failure"
Dec 24 11:59:13.115: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.954261ms
Dec 24 11:59:15.198: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097197974s
Dec 24 11:59:17.215: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113802798s
Dec 24 11:59:19.394: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.292406419s
Dec 24 11:59:21.406: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.304784123s
Dec 24 11:59:23.446: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.344920694s
Dec 24 11:59:25.558: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.456435913s
STEP: Saw pod success
Dec 24 11:59:25.558: INFO: Pod "pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 11:59:25.569: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 11:59:25.710: INFO: Waiting for pod pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005 to disappear
Dec 24 11:59:25.733: INFO: Pod pod-projected-secrets-cd36889c-2644-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:59:25.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-92l9p" for this suite.
Dec 24 11:59:31.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 11:59:31.921: INFO: namespace: e2e-tests-projected-92l9p, resource: bindings, ignored listing per whitelist
Dec 24 11:59:31.980: INFO: namespace e2e-tests-projected-92l9p deletion completed in 6.232963901s

• [SLOW TEST:19.185 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 11:59:31.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 24 11:59:42.505: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d8bc0402-2644-11ea-b7c4-0242ac110005,GenerateName:,Namespace:e2e-tests-events-7wv4l,SelfLink:/api/v1/namespaces/e2e-tests-events-7wv4l/pods/send-events-d8bc0402-2644-11ea-b7c4-0242ac110005,UID:d8bd92b5-2644-11ea-a994-fa163e34d433,ResourceVersion:15900055,Generation:0,CreationTimestamp:2019-12-24 11:59:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 405359589,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zk55l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zk55l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zk55l true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022a5090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022a50b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:59:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:59:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:59:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 11:59:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-24 11:59:32 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-24 11:59:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://71828402adad7eeeb8202259954226b44733a6f5b4a0a72f53de3138c52c7919}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 24 11:59:44.560: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 24 11:59:46.595: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 11:59:46.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-7wv4l" for this suite.
Dec 24 12:00:34.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:00:35.001: INFO: namespace: e2e-tests-events-7wv4l, resource: bindings, ignored listing per whitelist
Dec 24 12:00:35.007: INFO: namespace e2e-tests-events-7wv4l deletion completed in 48.331768562s

• [SLOW TEST:63.027 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:00:35.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 12:00:35.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-kgft6" to be "success or failure"
Dec 24 12:00:35.220: INFO: Pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.931667ms
Dec 24 12:00:37.990: INFO: Pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.7776482s
Dec 24 12:00:40.007: INFO: Pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.794952052s
Dec 24 12:00:42.327: INFO: Pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.114823689s
Dec 24 12:00:44.337: INFO: Pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.125256473s
Dec 24 12:00:46.614: INFO: Pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.401469769s
STEP: Saw pod success
Dec 24 12:00:46.614: INFO: Pod "downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:00:46.623: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 12:00:46.890: INFO: Waiting for pod downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:00:46.912: INFO: Pod downwardapi-volume-fe29843d-2644-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:00:46.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kgft6" for this suite.
Dec 24 12:00:52.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:00:53.087: INFO: namespace: e2e-tests-projected-kgft6, resource: bindings, ignored listing per whitelist
Dec 24 12:00:53.134: INFO: namespace e2e-tests-projected-kgft6 deletion completed in 6.211570915s

• [SLOW TEST:18.127 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
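The projected downward API test above exposes limits.memory through a volume file while deliberately setting no memory limit on the container, so the value falls back to the node's allocatable memory, which is exactly what the test name asserts. A minimal sketch; the container name client-container is from the log, while the file path, divisor and image are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_limit", // illustrative file name
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                        Divisor:       resource.MustParse("1"),
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                // No resources.limits.memory here, so the file above resolves to the
                // node's allocatable memory rather than a container limit.
                Name:    "client-container", // container name from the log
                Image:   "busybox",          // illustrative image
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}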
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:00:53.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 12:00:53.335: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 24 12:00:58.388: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 24 12:01:04.419: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 24 12:01:04.582: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-kpsrx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kpsrx/deployments/test-cleanup-deployment,UID:0f98d2b1-2645-11ea-a994-fa163e34d433,ResourceVersion:15900203,Generation:1,CreationTimestamp:2019-12-24 12:01:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 24 12:01:04.616: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:01:04.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-kpsrx" for this suite.
Dec 24 12:01:12.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:01:13.095: INFO: namespace: e2e-tests-deployment-kpsrx, resource: bindings, ignored listing per whitelist
Dec 24 12:01:13.163: INFO: namespace e2e-tests-deployment-kpsrx deletion completed in 8.446917435s

• [SLOW TEST:20.028 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
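The deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller delete old ReplicaSets as soon as they are scaled down; the test then waits for the history to be cleaned up. A minimal sketch of an equivalent Deployment, with the name, labels, container and image taken from the dump and only the marshalling scaffolding added:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(v int32) *int32 { return &v }

func main() {
    d := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "test-cleanup-deployment", // name from the log
            Labels: map[string]string{"name": "cleanup-pod"},
        },
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            // 0 keeps no old ReplicaSets around once they are fully scaled down.
            RevisionHistoryLimit: int32Ptr(0),
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"name": "cleanup-pod"},
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{"name": "cleanup-pod"},
                },
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0", // image from the dump above
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(out))
}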
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:01:13.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:02:11.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-2qwtt" for this suite.
Dec 24 12:02:17.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:02:18.018: INFO: namespace: e2e-tests-container-runtime-2qwtt, resource: bindings, ignored listing per whitelist
Dec 24 12:02:18.134: INFO: namespace e2e-tests-container-runtime-2qwtt deletion completed in 6.291137243s

• [SLOW TEST:64.970 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
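The container-runtime blackbox test above runs three short-lived containers; the rpa/rpof/rpn suffixes in their names appear to encode restartPolicy Always/OnFailure/Never, and for each one the suite checks RestartCount, pod Phase, the Ready condition and the terminated State. A minimal sketch of the Never case with an illustrative image and exit command: with restartPolicy Never a failed container is not restarted, the pod ends in phase Failed, and the exit code is recorded in the container's terminated state.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-rpn"}, // name pattern from the log
        Spec: corev1.PodSpec{
            // Never: the container runs once; its exit status determines the pod phase.
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "terminate-cmd-rpn",
                Image:   "busybox", // illustrative image
                Command: []string{"sh", "-c", "exit 1"}, // illustrative failing command
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}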
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:02:18.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 24 12:02:38.656: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 12:02:38.673: INFO: Pod pod-with-poststart-http-hook still exists
Dec 24 12:02:40.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 12:02:40.939: INFO: Pod pod-with-poststart-http-hook still exists
Dec 24 12:02:42.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 12:02:42.686: INFO: Pod pod-with-poststart-http-hook still exists
Dec 24 12:02:44.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 12:02:44.693: INFO: Pod pod-with-poststart-http-hook still exists
Dec 24 12:02:46.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 12:02:46.688: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:02:46.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7tkfb" for this suite.
Dec 24 12:03:11.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:03:11.519: INFO: namespace: e2e-tests-container-lifecycle-hook-7tkfb, resource: bindings, ignored listing per whitelist
Dec 24 12:03:11.607: INFO: namespace e2e-tests-container-lifecycle-hook-7tkfb deletion completed in 24.910022255s

• [SLOW TEST:53.471 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
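The postStart HTTP variant above has the same mechanics as the exec hook earlier, except the kubelet performs an HTTP GET (in this suite, against the helper pod created in BeforeEach) instead of exec'ing into the container. A minimal sketch, again using the pre-1.23 Handler type name; the pod name matches the log, while the container name, image, target host, port and path are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"}, // pod name from the log
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "poststart-http-demo", // illustrative container name
                Image: "busybox",             // illustrative image
                Lifecycle: &corev1.Lifecycle{
                    PostStart: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Host: "10.32.0.4",          // illustrative helper-pod IP
                            Port: intstr.FromInt(8080), // illustrative port
                            Path: "/echo?msg=poststart",
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}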
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:03:11.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-5b8a90b4-2645-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 12:03:11.982: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-mk6v5" to be "success or failure"
Dec 24 12:03:11.995: INFO: Pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.820319ms
Dec 24 12:03:14.017: INFO: Pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034058598s
Dec 24 12:03:16.044: INFO: Pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061778517s
Dec 24 12:03:18.076: INFO: Pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093831021s
Dec 24 12:03:20.091: INFO: Pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10828198s
Dec 24 12:03:22.467: INFO: Pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.484841549s
STEP: Saw pod success
Dec 24 12:03:22.467: INFO: Pod "pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:03:22.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 24 12:03:22.638: INFO: Waiting for pod pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:03:22.645: INFO: Pod pod-configmaps-5b9ac1cf-2645-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:03:22.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mk6v5" for this suite.
Dec 24 12:03:28.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:03:28.888: INFO: namespace: e2e-tests-configmap-mk6v5, resource: bindings, ignored listing per whitelist
Dec 24 12:03:28.976: INFO: namespace e2e-tests-configmap-mk6v5 deletion completed in 6.24560372s

• [SLOW TEST:17.369 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
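The ConfigMap test above combines a key-to-path mapping with a per-item file mode, which overrides the volume's defaultMode for that one file. A minimal sketch; the configMap name and the container name configmap-volume-test come from the log, while the key, target path, 0400 mode and image are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    itemMode := int32(0400) // illustrative per-item mode

    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "configmap-test-volume-map-5b8a90b4-2645-11ea-b7c4-0242ac110005", // from the log
                        },
                        // Remap one key to a path and give that file its own mode.
                        Items: []corev1.KeyToPath{{
                            Key:  "data-1",         // illustrative key
                            Path: "path/to/data-1", // illustrative target path
                            Mode: &itemMode,
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test", // container name from the log
                Image:   "busybox",               // illustrative image
                Command: []string{"sh", "-c", "ls -l /etc/configmap-volume/path/to/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}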
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:03:28.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 12:03:29.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 24 12:03:29.361: INFO: stderr: ""
Dec 24 12:03:29.361: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:03:29.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rk2f2" for this suite.
Dec 24 12:03:35.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:03:35.608: INFO: namespace: e2e-tests-kubectl-rk2f2, resource: bindings, ignored listing per whitelist
Dec 24 12:03:35.620: INFO: namespace e2e-tests-kubectl-rk2f2 deletion completed in 6.244582768s

• [SLOW TEST:6.643 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
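The check above is just the kubectl version call that is logged verbatim; a hedged way to reproduce it by hand against the same kubeconfig (the JSON form is only needed if you want to parse the client/server fields):

kubectl --kubeconfig=/root/.kube/config version
# machine-readable form of the same data
kubectl --kubeconfig=/root/.kube/config version -o json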
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:03:35.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 12:03:35.797: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-789d9" to be "success or failure"
Dec 24 12:03:35.805: INFO: Pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014235ms
Dec 24 12:03:37.824: INFO: Pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027062703s
Dec 24 12:03:39.864: INFO: Pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067094739s
Dec 24 12:03:42.252: INFO: Pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455221992s
Dec 24 12:03:44.272: INFO: Pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.474756951s
Dec 24 12:03:46.290: INFO: Pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.493066069s
STEP: Saw pod success
Dec 24 12:03:46.290: INFO: Pod "downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:03:46.296: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 12:03:46.389: INFO: Waiting for pod downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:03:47.197: INFO: Pod downwardapi-volume-69ccefdb-2645-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:03:47.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-789d9" for this suite.
Dec 24 12:03:53.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:03:53.662: INFO: namespace: e2e-tests-downward-api-789d9, resource: bindings, ignored listing per whitelist
Dec 24 12:03:53.667: INFO: namespace e2e-tests-downward-api-789d9 deletion completed in 6.45489437s

• [SLOW TEST:18.047 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
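A hedged sketch of the downward API volume shape this case exercises: a pod field projected as a file with an explicit per-item mode. The pod name, mount path, and projected field are assumptions for illustration, not the values the framework generated for this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname            # file created under the mount point
        fieldRef:
          fieldPath: metadata.name
        mode: 0400               # the per-item mode this test asserts on
EOF
kubectl logs downwardapi-mode-demo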
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:03:53.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 24 12:03:54.975: INFO: Pod name wrapped-volume-race-752456ad-2645-11ea-b7c4-0242ac110005: Found 0 pods out of 5
Dec 24 12:04:00.001: INFO: Pod name wrapped-volume-race-752456ad-2645-11ea-b7c4-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-752456ad-2645-11ea-b7c4-0242ac110005 in namespace e2e-tests-emptydir-wrapper-kdwg7, will wait for the garbage collector to delete the pods
Dec 24 12:05:42.256: INFO: Deleting ReplicationController wrapped-volume-race-752456ad-2645-11ea-b7c4-0242ac110005 took: 142.740323ms
Dec 24 12:05:42.557: INFO: Terminating ReplicationController wrapped-volume-race-752456ad-2645-11ea-b7c4-0242ac110005 pods took: 300.948276ms
STEP: Creating RC which spawns configmap-volume pods
Dec 24 12:06:33.949: INFO: Pod name wrapped-volume-race-d3f0b865-2645-11ea-b7c4-0242ac110005: Found 0 pods out of 5
Dec 24 12:06:38.969: INFO: Pod name wrapped-volume-race-d3f0b865-2645-11ea-b7c4-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d3f0b865-2645-11ea-b7c4-0242ac110005 in namespace e2e-tests-emptydir-wrapper-kdwg7, will wait for the garbage collector to delete the pods
Dec 24 12:09:05.330: INFO: Deleting ReplicationController wrapped-volume-race-d3f0b865-2645-11ea-b7c4-0242ac110005 took: 37.826377ms
Dec 24 12:09:05.631: INFO: Terminating ReplicationController wrapped-volume-race-d3f0b865-2645-11ea-b7c4-0242ac110005 pods took: 300.931389ms
STEP: Creating RC which spawns configmap-volume pods
Dec 24 12:09:53.148: INFO: Pod name wrapped-volume-race-4aa832b2-2646-11ea-b7c4-0242ac110005: Found 0 pods out of 5
Dec 24 12:09:58.176: INFO: Pod name wrapped-volume-race-4aa832b2-2646-11ea-b7c4-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4aa832b2-2646-11ea-b7c4-0242ac110005 in namespace e2e-tests-emptydir-wrapper-kdwg7, will wait for the garbage collector to delete the pods
Dec 24 12:12:12.314: INFO: Deleting ReplicationController wrapped-volume-race-4aa832b2-2646-11ea-b7c4-0242ac110005 took: 30.603319ms
Dec 24 12:12:12.615: INFO: Terminating ReplicationController wrapped-volume-race-4aa832b2-2646-11ea-b7c4-0242ac110005 pods took: 300.803703ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:13:05.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kdwg7" for this suite.
Dec 24 12:13:15.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:13:15.682: INFO: namespace: e2e-tests-emptydir-wrapper-kdwg7, resource: bindings, ignored listing per whitelist
Dec 24 12:13:15.731: INFO: namespace e2e-tests-emptydir-wrapper-kdwg7 deletion completed in 10.391909188s

• [SLOW TEST:562.063 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
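A hedged, scaled-down sketch of the shape of the racy workload above: the real test creates 50 ConfigMaps and a ReplicationController whose 5 replicas each mount all of them as separate volumes, then deletes and recreates the RC to look for mount races. Names and the reduced volume count here are illustrative only.

for i in 1 2 3; do kubectl create configmap race-cm-$i --from-literal=data=value-$i; done
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    app: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        app: wrapped-volume-race-demo
    spec:
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sleep", "10000"]
        volumeMounts:
        - { name: racey-configmap-1, mountPath: /etc/config-1 }
        - { name: racey-configmap-2, mountPath: /etc/config-2 }
        - { name: racey-configmap-3, mountPath: /etc/config-3 }
      volumes:
      - { name: racey-configmap-1, configMap: { name: race-cm-1 } }
      - { name: racey-configmap-2, configMap: { name: race-cm-2 } }
      - { name: racey-configmap-3, configMap: { name: race-cm-3 } }
EOF
# delete and recreate to exercise repeated mount/unmount of the configmap volumes
kubectl delete rc wrapped-volume-race-demo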
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:13:15.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 24 12:13:30.680: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Dec 24 12:15:03.198: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-qgdt5".
STEP: Found 0 events.
Dec 24 12:15:03.228: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Dec 24 12:15:03.229: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:13:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:13:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:13:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:13:30 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 20:28:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 20:28:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 20:28:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 20:28:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:07:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:07:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Dec 24 12:15:03.229: INFO: 
Dec 24 12:15:03.238: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Dec 24 12:15:03.244: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:15901881,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-24 12:15:01 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-24 12:15:01 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-24 12:15:01 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-24 12:15:01 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx:latest] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} 
{[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 24 12:15:03.245: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Dec 24 12:15:03.252: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Dec 24 12:15:03.290: INFO: test-pod-uninitialized started at 2019-12-24 12:13:30 +0000 UTC (0+1 container statuses recorded)
Dec 24 12:15:03.290: INFO: 	Container nginx ready: true, restart count 0
Dec 24 12:15:03.290: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Dec 24 12:15:03.290: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Dec 24 12:15:03.290: INFO: 	Container coredns ready: true, restart count 0
Dec 24 12:15:03.290: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Dec 24 12:15:03.290: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 12:15:03.290: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Dec 24 12:15:03.290: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Dec 24 12:15:03.290: INFO: 	Container weave ready: true, restart count 0
Dec 24 12:15:03.290: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 12:15:03.290: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Dec 24 12:15:03.290: INFO: 	Container coredns ready: true, restart count 0
Dec 24 12:15:03.290: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Dec 24 12:15:03.290: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W1224 12:15:03.296281       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 12:15:03.385: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Dec 24 12:15:03.385: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m21.427513s}
Dec 24 12:15:03.385: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:1m4.227012s}
Dec 24 12:15:03.385: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:32.520176s}
Dec 24 12:15:03.385: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.5 Latency:12.165639s}
Dec 24 12:15:03.385: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.084169s}
Dec 24 12:15:03.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-qgdt5" for this suite.
Dec 24 12:15:09.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:15:09.522: INFO: namespace: e2e-tests-namespaces-qgdt5, resource: bindings, ignored listing per whitelist
Dec 24 12:15:09.608: INFO: namespace e2e-tests-namespaces-qgdt5 deletion completed in 6.212358406s
STEP: Destroying namespace "e2e-tests-nsdeletetest-plh2f" for this suite.
Dec 24 12:15:09.612: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-plh2f": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-plh2f": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-plh2f\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc00237f440), Code:409}})

• Failure [113.882 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000db8a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
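The failure above is the test timing out while waiting for the namespace to empty, with test-pod-uninitialized still Running when the events are collected. A hedged, by-hand version of the same check (all names invented; assumes a reachable cluster):

kubectl create namespace nsdelete-demo
kubectl run test-pod --image=nginx:1.14-alpine --restart=Never -n nsdelete-demo
kubectl get pod test-pod -n nsdelete-demo -o jsonpath='{.status.phase}'   # wait until Running
kubectl delete namespace nsdelete-demo --wait=false
# the namespace sits in Terminating until every pod in it has been removed
kubectl get namespace nsdelete-demo -o jsonpath='{.status.phase}'
kubectl get pods -n nsdelete-demo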
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:15:09.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-g6vrb in namespace e2e-tests-proxy-2rdsd
I1224 12:15:10.036989       8 runners.go:184] Created replication controller with name: proxy-service-g6vrb, namespace: e2e-tests-proxy-2rdsd, replica count: 1
I1224 12:15:11.087824       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:12.088131       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:13.088467       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:14.089123       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:15.089762       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:16.090193       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:17.090569       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:18.091013       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1224 12:15:19.091544       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1224 12:15:20.092250       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1224 12:15:21.092639       8 runners.go:184] proxy-service-g6vrb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 24 12:15:21.104: INFO: setup took 11.235957147s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 24 12:15:21.136: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-2rdsd/pods/proxy-service-g6vrb-mg8c4:160/proxy/: foo (200; 31.437031ms)
Dec 24 12:15:21.137: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-2rdsd/pods/proxy-service-g6vrb-mg8c4/proxy/: 
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 24 12:15:49.694: INFO: 10 pods remaining
Dec 24 12:15:49.694: INFO: 10 pods has nil DeletionTimestamp
Dec 24 12:15:49.694: INFO: 
Dec 24 12:15:51.218: INFO: 1 pods remaining
Dec 24 12:15:51.218: INFO: 0 pods has nil DeletionTimestamp
Dec 24 12:15:51.218: INFO: 
Dec 24 12:15:51.684: INFO: 0 pods remaining
Dec 24 12:15:51.685: INFO: 0 pods has nil DeletionTimestamp
Dec 24 12:15:51.685: INFO: 
STEP: Gathering metrics
W1224 12:15:52.733518       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 12:15:52.733: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:15:52.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7j7lg" for this suite.
Dec 24 12:16:08.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:16:08.877: INFO: namespace: e2e-tests-gc-7j7lg, resource: bindings, ignored listing per whitelist
Dec 24 12:16:08.978: INFO: namespace e2e-tests-gc-7j7lg deletion completed in 16.225272429s

• [SLOW TEST:27.914 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
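The deleteOptions this test passes amount to a Foreground propagation policy: the ReplicationController object is kept (carrying the foregroundDeletion finalizer) until the garbage collector has deleted all of its pods. A hedged sketch of issuing the same request by hand against a hypothetical rc-demo in the default namespace, via kubectl proxy:

kubectl proxy --port=8001 &
curl -s -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/rc-demo
# while its pods are still terminating, the RC is still readable and carries the finalizer
kubectl get rc rc-demo -o jsonpath='{.metadata.finalizers}'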
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:16:08.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-2ace35dd-2647-11ea-b7c4-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-2ace366a-2647-11ea-b7c4-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2ace35dd-2647-11ea-b7c4-0242ac110005
STEP: Updating configmap cm-test-opt-upd-2ace366a-2647-11ea-b7c4-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-2ace3697-2647-11ea-b7c4-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:17:54.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h4lmw" for this suite.
Dec 24 12:18:18.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:18:18.685: INFO: namespace: e2e-tests-configmap-h4lmw, resource: bindings, ignored listing per whitelist
Dec 24 12:18:18.723: INFO: namespace e2e-tests-configmap-h4lmw deletion completed in 24.286992395s

• [SLOW TEST:129.743 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
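A hedged sketch of what "optional updates should be reflected in volume" exercises: three optional ConfigMap volumes, of which one is deleted, one is updated, and one is only created after the pod is running. Names are invented; propagation goes through the kubelet sync loop, so allow a minute or two for the mounted files to follow.

kubectl create configmap cm-opt-del --from-literal=data-1=value-1
kubectl create configmap cm-opt-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/cm-del /etc/cm-upd /etc/cm-create 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - { name: del,    mountPath: /etc/cm-del }
    - { name: upd,    mountPath: /etc/cm-upd }
    - { name: create, mountPath: /etc/cm-create }
  volumes:
  - { name: del,    configMap: { name: cm-opt-del,    optional: true } }
  - { name: upd,    configMap: { name: cm-opt-upd,    optional: true } }
  - { name: create, configMap: { name: cm-opt-create, optional: true } }   # does not exist yet
EOF
# mirror the test's mutations, then watch the mounted files converge
kubectl delete configmap cm-opt-del
kubectl create configmap cm-opt-upd --from-literal=data-1=value-updated --dry-run -o yaml | kubectl apply -f -
kubectl create configmap cm-opt-create --from-literal=data-1=created
kubectl logs -f pod-configmaps-optional-demo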
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:18:18.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 24 12:18:18.906: INFO: Waiting up to 5m0s for pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-hhpnb" to be "success or failure"
Dec 24 12:18:18.911: INFO: Pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.968782ms
Dec 24 12:18:21.494: INFO: Pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.587192812s
Dec 24 12:18:23.534: INFO: Pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.627748961s
Dec 24 12:18:26.549: INFO: Pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.642061997s
Dec 24 12:18:28.590: INFO: Pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.683178701s
Dec 24 12:18:30.607: INFO: Pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.700866779s
STEP: Saw pod success
Dec 24 12:18:30.608: INFO: Pod "pod-782c3e3d-2647-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:18:30.613: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-782c3e3d-2647-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 12:18:30.881: INFO: Waiting for pod pod-782c3e3d-2647-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:18:30.907: INFO: Pod pod-782c3e3d-2647-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:18:30.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hhpnb" for this suite.
Dec 24 12:18:37.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:18:38.135: INFO: namespace: e2e-tests-emptydir-hhpnb, resource: bindings, ignored listing per whitelist
Dec 24 12:18:38.247: INFO: namespace e2e-tests-emptydir-hhpnb deletion completed in 7.324045211s

• [SLOW TEST:19.524 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
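A hedged sketch of the (non-root,0777,default) combination: a non-root container writing a 0777-mode file onto a default-medium emptyDir. The user id, image, and names are assumptions for illustration, not what the framework's mounttest image does internally.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0777-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -ln /test-volume/f && id -u"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium; medium: Memory would be the tmpfs variant
EOF
kubectl logs emptydir-nonroot-0777-demo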
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:18:38.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xn7zh
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xn7zh
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-xn7zh
Dec 24 12:18:38.700: INFO: Found 0 stateful pods, waiting for 1
Dec 24 12:18:48.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 24 12:18:48.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 12:18:49.477: INFO: stderr: ""
Dec 24 12:18:49.477: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 12:18:49.477: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 12:18:49.491: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 24 12:18:59.586: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
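The mv above only halts scaling because the stateful pods gate readiness on serving index.html: once the file is gone the readiness probe fails, the pod reports Ready=false, and the controller refuses to scale past it. A hedged sketch of a StatefulSet with that shape (image, probe path, and all names are assumptions, not values read from this log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None                 # headless service backing the StatefulSet
  selector: { app: ss-demo }
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-demo
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: { app: ss-demo }
  template:
    metadata:
      labels: { app: ss-demo }
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:
          httpGet: { path: /index.html, port: 80 }
          periodSeconds: 1
EOF
# break readiness (what the test's mv does), then restore it
kubectl exec ss-demo-0 -- sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl exec ss-demo-0 -- sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'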
Dec 24 12:18:59.587: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 12:18:59.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997095s
Dec 24 12:19:00.647: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988541005s
Dec 24 12:19:01.668: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.967928358s
Dec 24 12:19:02.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.946451673s
Dec 24 12:19:03.705: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.921932712s
Dec 24 12:19:06.375: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.909463649s
Dec 24 12:19:07.399: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.239489196s
Dec 24 12:19:08.420: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.21539829s
Dec 24 12:19:09.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 194.328384ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-xn7zh
Dec 24 12:19:10.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:19:11.248: INFO: stderr: ""
Dec 24 12:19:11.248: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 12:19:11.248: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 12:19:11.265: INFO: Found 1 stateful pods, waiting for 3
Dec 24 12:19:21.285: INFO: Found 2 stateful pods, waiting for 3
Dec 24 12:19:31.393: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 12:19:31.393: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 12:19:31.393: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 24 12:19:41.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 12:19:41.286: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 12:19:41.286: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 24 12:19:41.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 12:19:41.992: INFO: stderr: ""
Dec 24 12:19:41.993: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 12:19:41.993: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 12:19:41.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 12:19:43.056: INFO: stderr: ""
Dec 24 12:19:43.056: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 12:19:43.056: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 12:19:43.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 12:19:43.568: INFO: stderr: ""
Dec 24 12:19:43.568: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 12:19:43.568: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 12:19:43.568: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 12:19:43.590: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 24 12:19:53.646: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 12:19:53.646: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 12:19:53.646: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 12:19:53.685: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998335s
Dec 24 12:19:54.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982870084s
Dec 24 12:19:55.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965600576s
Dec 24 12:19:56.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.948355167s
Dec 24 12:19:57.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.926763s
Dec 24 12:19:58.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.906457826s
Dec 24 12:19:59.798: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.891815344s
Dec 24 12:20:00.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.870405103s
Dec 24 12:20:01.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.843358492s
Dec 24 12:20:03.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 823.041651ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-xn7zh
Dec 24 12:20:04.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:05.356: INFO: stderr: ""
Dec 24 12:20:05.356: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 12:20:05.356: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 12:20:05.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:06.302: INFO: stderr: ""
Dec 24 12:20:06.302: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 12:20:06.302: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 12:20:06.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:06.737: INFO: rc: 126
Dec 24 12:20:06.737: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 command terminated with exit code 126
 []  0xc00251c4e0 exit status 126   true [0xc001bc60e8 0xc001bc6100 0xc001bc6118] [0xc001bc60e8 0xc001bc6100 0xc001bc6118] [0xc001bc60f8 0xc001bc6110] [0x935700 0x935700] 0xc001b95500 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
command terminated with exit code 126

error:
exit status 126

Dec 24 12:20:16.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:16.912: INFO: rc: 1
Dec 24 12:20:16.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00251c690 exit status 1   true [0xc001bc6120 0xc001bc6138 0xc001bc6150] [0xc001bc6120 0xc001bc6138 0xc001bc6150] [0xc001bc6130 0xc001bc6148] [0x935700 0x935700] 0xc001e5c060 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 24 12:20:26.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:27.067: INFO: rc: 1
Dec 24 12:20:27.067: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00251c7e0 exit status 1   true [0xc001bc6158 0xc001bc6170 0xc001bc6188] [0xc001bc6158 0xc001bc6170 0xc001bc6188] [0xc001bc6168 0xc001bc6180] [0x935700 0x935700] 0xc001e5c360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 24 12:20:37.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:37.247: INFO: rc: 1
Dec 24 12:20:37.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a17200 exit status 1   true [0xc00000fab0 0xc00000fb98 0xc00000fc40] [0xc00000fab0 0xc00000fb98 0xc00000fc40] [0xc00000fb78 0xc00000fc00] [0x935700 0x935700] 0xc0021af6e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 24 12:20:47.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:47.413: INFO: rc: 1
Dec 24 12:20:47.413: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000bb8120 exit status 1   true [0xc00242c000 0xc00242c018 0xc00242c030] [0xc00242c000 0xc00242c018 0xc00242c030] [0xc00242c010 0xc00242c028] [0x935700 0x935700] 0xc001c39260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 24 12:20:57.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:20:57.533: INFO: rc: 1
Dec 24 12:20:57.533: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a173e0 exit status 1   true [0xc00000fc80 0xc00000fd18 0xc00000fda8] [0xc00000fc80 0xc00000fd18 0xc00000fda8] [0xc00000fd08 0xc00000fd70] [0x935700 0x935700] 0xc0021afaa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 24 12:21:07 - 12:25:01: INFO: RunHostCmd retried every 10s (24 further attempts of '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'), each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-2" not found

Dec 24 12:25:11.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7zh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 12:25:11.623: INFO: rc: 1
Dec 24 12:25:11.623: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Dec 24 12:25:11.623: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 24 12:25:11.662: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xn7zh
Dec 24 12:25:11.667: INFO: Scaling statefulset ss to 0
Dec 24 12:25:11.688: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 12:25:11.692: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:25:11.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xn7zh" for this suite.
Dec 24 12:25:19.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:25:20.258: INFO: namespace: e2e-tests-statefulset-xn7zh, resource: bindings, ignored listing per whitelist
Dec 24 12:25:20.380: INFO: namespace e2e-tests-statefulset-xn7zh deletion completed in 8.566174823s

• [SLOW TEST:402.133 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
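
For context on what this spec exercises (not part of the test output): the suite breaks a pod's readiness by moving /usr/share/nginx/html/index.html aside and later restores it with the mv command retried above, then checks that scale-up and scale-down respect ordinal order. A minimal sketch of a comparable StatefulSet, with illustrative names only; the manifest the e2e framework actually uses is not shown in this log:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                    # headless Service assumed to exist
  replicas: 3
  podManagementPolicy: OrderedReady    # default; scale-down removes ss-2, ss-1, ss-0 in turn
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet: {path: /index.html, port: 80}
EOF
# with OrderedReady, pods are deleted in reverse ordinal order, and scaling halts
# while any pod is unready
kubectl scale statefulset ss --replicas=0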
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:25:20.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-73a7e59e-2648-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 12:25:20.889: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-tqxrx" to be "success or failure"
Dec 24 12:25:20.901: INFO: Pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.719631ms
Dec 24 12:25:22.933: INFO: Pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044033093s
Dec 24 12:25:24.957: INFO: Pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067979839s
Dec 24 12:25:27.024: INFO: Pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134719578s
Dec 24 12:25:29.048: INFO: Pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159368651s
Dec 24 12:25:31.064: INFO: Pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175063319s
STEP: Saw pod success
Dec 24 12:25:31.064: INFO: Pod "pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:25:31.071: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 12:25:31.229: INFO: Waiting for pod pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:25:31.245: INFO: Pod pod-projected-configmaps-73a8cec0-2648-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:25:31.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tqxrx" for this suite.
Dec 24 12:25:37.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:25:37.537: INFO: namespace: e2e-tests-projected-tqxrx, resource: bindings, ignored listing per whitelist
Dec 24 12:25:37.559: INFO: namespace e2e-tests-projected-tqxrx deletion completed in 6.302879797s

• [SLOW TEST:17.178 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
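
As an aside, not from the log: the pattern under test is a pod that mounts a ConfigMap through a projected volume and a test container that reads the file back. A rough sketch with made-up names; the image and command the framework actually runs are not visible here:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF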
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:25:37.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-4wxk
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 12:25:37.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4wxk" in namespace "e2e-tests-subpath-r2lmj" to be "success or failure"
Dec 24 12:25:37.792: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.396715ms
Dec 24 12:25:39.992: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217922828s
Dec 24 12:25:42.008: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233693741s
Dec 24 12:25:44.074: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300093985s
Dec 24 12:25:46.118: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34447664s
Dec 24 12:25:48.142: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.368110776s
Dec 24 12:25:50.158: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.383815756s
Dec 24 12:25:52.504: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=true. Elapsed: 14.730557292s
Dec 24 12:25:54.534: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 16.760116731s
Dec 24 12:25:56.602: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 18.827999498s
Dec 24 12:25:58.616: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 20.842486649s
Dec 24 12:26:00.671: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 22.896671743s
Dec 24 12:26:02.689: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 24.9150856s
Dec 24 12:26:04.703: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 26.929290894s
Dec 24 12:26:06.715: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 28.941053669s
Dec 24 12:26:08.752: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 30.977897507s
Dec 24 12:26:10.769: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Running", Reason="", readiness=false. Elapsed: 32.994762492s
Dec 24 12:26:12.839: INFO: Pod "pod-subpath-test-configmap-4wxk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.065159663s
STEP: Saw pod success
Dec 24 12:26:12.839: INFO: Pod "pod-subpath-test-configmap-4wxk" satisfied condition "success or failure"
Dec 24 12:26:12.852: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-4wxk container test-container-subpath-configmap-4wxk: 
STEP: delete the pod
Dec 24 12:26:14.035: INFO: Waiting for pod pod-subpath-test-configmap-4wxk to disappear
Dec 24 12:26:14.079: INFO: Pod pod-subpath-test-configmap-4wxk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4wxk
Dec 24 12:26:14.079: INFO: Deleting pod "pod-subpath-test-configmap-4wxk" in namespace "e2e-tests-subpath-r2lmj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:26:14.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-r2lmj" for this suite.
Dec 24 12:26:20.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:26:20.465: INFO: namespace: e2e-tests-subpath-r2lmj, resource: bindings, ignored listing per whitelist
Dec 24 12:26:20.469: INFO: namespace e2e-tests-subpath-r2lmj deletion completed in 6.281414188s

• [SLOW TEST:42.910 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
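
For orientation, not part of the log: mounting with subPath places a single key from the volume over one existing file in the container image instead of shadowing the whole directory. A sketch under assumed, hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox
    command: ["cat", "/etc/hostname"]   # existing file, replaced by the subPath mount below
    volumeMounts:
    - name: cfg
      mountPath: /etc/hostname
      subPath: hostname                 # only this key is mounted, over one file
  volumes:
  - name: cfg
    configMap:
      name: subpath-cm                  # assumed to contain a "hostname" key
EOF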
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:26:20.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-9761653f-2648-11ea-b7c4-0242ac110005
STEP: Creating secret with name s-test-opt-upd-976166e5-2648-11ea-b7c4-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9761653f-2648-11ea-b7c4-0242ac110005
STEP: Updating secret s-test-opt-upd-976166e5-2648-11ea-b7c4-0242ac110005
STEP: Creating secret with name s-test-opt-create-97616715-2648-11ea-b7c4-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:27:53.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n8z5t" for this suite.
Dec 24 12:28:17.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:28:18.172: INFO: namespace: e2e-tests-projected-n8z5t, resource: bindings, ignored listing per whitelist
Dec 24 12:28:18.253: INFO: namespace e2e-tests-projected-n8z5t deletion completed in 24.47946074s

• [SLOW TEST:117.783 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
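
Background, not from the log: the "optional updates" case relies on marking projected Secret sources as optional, so the pod keeps running while a referenced Secret is deleted or not yet created, and on the kubelet refreshing the projected content as Secrets change. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/creds; sleep 5; done"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-test-opt-create      # may not exist yet; pod still starts
          optional: true
      - secret:
          name: s-test-opt-upd
          optional: true
EOF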
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:28:18.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-vf6s6
Dec 24 12:28:30.432: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-vf6s6
STEP: checking the pod's current state and verifying that restartCount is present
Dec 24 12:28:30.437: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:32:31.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vf6s6" for this suite.
Dec 24 12:32:39.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:32:39.851: INFO: namespace: e2e-tests-container-probe-vf6s6, resource: bindings, ignored listing per whitelist
Dec 24 12:32:39.921: INFO: namespace e2e-tests-container-probe-vf6s6 deletion completed in 8.353457741s

• [SLOW TEST:261.667 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
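
For context, not part of the test output: the probe execs `cat /tmp/health` inside the container, so as long as the file exists the probe succeeds and the restart count should stay at 0. A minimal sketch with an assumed image and assumed timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'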
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:32:39.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-797df104-2649-11ea-b7c4-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-797df104-2649-11ea-b7c4-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:33:54.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-thjv5" for this suite.
Dec 24 12:34:18.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:34:18.654: INFO: namespace: e2e-tests-projected-thjv5, resource: bindings, ignored listing per whitelist
Dec 24 12:34:18.755: INFO: namespace e2e-tests-projected-thjv5 deletion completed in 24.170481271s

• [SLOW TEST:98.833 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
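
A side note, not from the log: after the pod is running, the test mutates the ConfigMap and waits for the projected file to change; projected volumes are re-synced by the kubelet periodically, so the update becomes visible eventually rather than instantly. Sketch with placeholder names:

kubectl create configmap upd-demo --from-literal=data-1=value-1
# ... a pod mounting upd-demo through a projected volume is assumed to be running ...
kubectl patch configmap upd-demo -p '{"data":{"data-1":"value-2"}}'
# the file in the projected volume catches up after the kubelet's next volume sync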
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:34:18.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-x9k9q
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-x9k9q
STEP: Deleting pre-stop pod
Dec 24 12:34:44.588: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:34:44.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-x9k9q" for this suite.
Dec 24 12:35:24.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:35:24.909: INFO: namespace: e2e-tests-prestop-x9k9q, resource: bindings, ignored listing per whitelist
Dec 24 12:35:24.919: INFO: namespace e2e-tests-prestop-x9k9q deletion completed in 40.254068323s

• [SLOW TEST:66.164 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
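
Not part of the log, but for orientation: a preStop lifecycle hook runs before a container is stopped; here the server pod records the hook call it receives when the tester pod is deleted (the "prestop": 1 entry above). A rough sketch of a pod carrying such a hook, with invented names and an invented endpoint:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -q -O- http://server:8080/prestop"]   # notify a peer before shutdown
EOF
kubectl delete pod prestop-demo   # runs the preStop hook before the container is stopped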
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:35:24.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-dbdb2958-2649-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 12:35:25.170: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-bmj59" to be "success or failure"
Dec 24 12:35:25.178: INFO: Pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.350987ms
Dec 24 12:35:27.522: INFO: Pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351312801s
Dec 24 12:35:29.533: INFO: Pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362628279s
Dec 24 12:35:31.547: INFO: Pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.376187728s
Dec 24 12:35:33.567: INFO: Pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396214087s
Dec 24 12:35:35.632: INFO: Pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.461879536s
STEP: Saw pod success
Dec 24 12:35:35.633: INFO: Pod "pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:35:35.653: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 12:35:36.016: INFO: Waiting for pod pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:35:36.034: INFO: Pod pod-projected-configmaps-dbde8647-2649-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:35:36.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bmj59" for this suite.
Dec 24 12:35:42.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:35:42.365: INFO: namespace: e2e-tests-projected-bmj59, resource: bindings, ignored listing per whitelist
Dec 24 12:35:42.391: INFO: namespace e2e-tests-projected-bmj59 deletion completed in 6.338881253s

• [SLOW TEST:17.471 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
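
The next few specs (mappings, non-root, multiple volumes, mappings as non-root) are variations on the same projected-ConfigMap pattern; for context, and not taken from the log, the "mappings" and "non-root" variants correspond roughly to an items: key-to-path remap plus a pod securityContext, as in this sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mapped
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # the "as non-root" variant
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1              # "with mappings": remap a key to a different path
            path: path/to/data-2
EOF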
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:35:42.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-e64e59cf-2649-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 12:35:42.773: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-26clf" to be "success or failure"
Dec 24 12:35:42.794: INFO: Pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.353047ms
Dec 24 12:35:44.833: INFO: Pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059385627s
Dec 24 12:35:46.868: INFO: Pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094110792s
Dec 24 12:35:49.318: INFO: Pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544274737s
Dec 24 12:35:51.379: INFO: Pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.605856243s
Dec 24 12:35:53.400: INFO: Pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.626980677s
STEP: Saw pod success
Dec 24 12:35:53.401: INFO: Pod "pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:35:53.416: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 12:35:53.589: INFO: Waiting for pod pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:35:53.605: INFO: Pod pod-projected-configmaps-e65907d9-2649-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:35:53.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-26clf" for this suite.
Dec 24 12:35:59.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:35:59.817: INFO: namespace: e2e-tests-projected-26clf, resource: bindings, ignored listing per whitelist
Dec 24 12:36:00.141: INFO: namespace e2e-tests-projected-26clf deletion completed in 6.516969901s

• [SLOW TEST:17.749 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:36:00.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f10190f1-2649-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 12:36:00.782: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-4cxgn" to be "success or failure"
Dec 24 12:36:00.793: INFO: Pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.957532ms
Dec 24 12:36:02.808: INFO: Pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025607515s
Dec 24 12:36:04.829: INFO: Pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04732469s
Dec 24 12:36:06.856: INFO: Pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074047531s
Dec 24 12:36:08.882: INFO: Pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100281903s
Dec 24 12:36:10.896: INFO: Pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113769564s
STEP: Saw pod success
Dec 24 12:36:10.896: INFO: Pod "pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:36:10.900: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 12:36:10.957: INFO: Waiting for pod pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:36:11.317: INFO: Pod pod-projected-configmaps-f106e477-2649-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:36:11.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4cxgn" for this suite.
Dec 24 12:36:19.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:36:19.452: INFO: namespace: e2e-tests-projected-4cxgn, resource: bindings, ignored listing per whitelist
Dec 24 12:36:19.579: INFO: namespace e2e-tests-projected-4cxgn deletion completed in 8.240760703s

• [SLOW TEST:19.437 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:36:19.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-fc7616ad-2649-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 12:36:19.950: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-4nhwc" to be "success or failure"
Dec 24 12:36:19.978: INFO: Pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.698627ms
Dec 24 12:36:22.015: INFO: Pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06459972s
Dec 24 12:36:24.053: INFO: Pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102175654s
Dec 24 12:36:26.342: INFO: Pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391860469s
Dec 24 12:36:28.356: INFO: Pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4055946s
Dec 24 12:36:30.376: INFO: Pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.425550169s
STEP: Saw pod success
Dec 24 12:36:30.376: INFO: Pod "pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:36:30.385: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 12:36:30.722: INFO: Waiting for pod pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:36:30.754: INFO: Pod pod-projected-configmaps-fc83a16a-2649-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:36:30.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4nhwc" for this suite.
Dec 24 12:36:36.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:36:36.986: INFO: namespace: e2e-tests-projected-4nhwc, resource: bindings, ignored listing per whitelist
Dec 24 12:36:37.011: INFO: namespace e2e-tests-projected-4nhwc deletion completed in 6.186937932s

• [SLOW TEST:17.432 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:36:37.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-06deee7b-264a-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 12:36:37.318: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-rwqs8" to be "success or failure"
Dec 24 12:36:37.408: INFO: Pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.147864ms
Dec 24 12:36:39.688: INFO: Pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369758126s
Dec 24 12:36:41.710: INFO: Pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392081218s
Dec 24 12:36:44.066: INFO: Pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.747323282s
Dec 24 12:36:46.580: INFO: Pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.262000515s
Dec 24 12:36:48.640: INFO: Pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.321289579s
STEP: Saw pod success
Dec 24 12:36:48.640: INFO: Pod "pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:36:48.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 12:36:48.881: INFO: Waiting for pod pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:36:48.913: INFO: Pod pod-projected-secrets-06dfa859-264a-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:36:48.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rwqs8" for this suite.
Dec 24 12:36:57.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:36:57.341: INFO: namespace: e2e-tests-projected-rwqs8, resource: bindings, ignored listing per whitelist
Dec 24 12:36:57.422: INFO: namespace e2e-tests-projected-rwqs8 deletion completed in 8.338638124s

• [SLOW TEST:20.410 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
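Editor's note: the projected-secret test above boils down to mounting a Secret through a projected volume with defaultMode set and reading the file back from a short-lived pod. A minimal sketch of an equivalent setup, driven through kubectl create -f - the way the suite drives the API (secret name, key, image, paths, and the mode are illustrative, not taken from this run):

# create a secret, then a pod that mounts it via a projected volume with defaultMode
kubectl create secret generic projected-secret-example --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-example
EOF

The pod is expected to run to completion with the mounted file carrying the requested mode, which is what the "success or failure" wait and the log check above assert.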
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:36:57.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 12:36:57.771: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1300a39c-264a-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0020d2dd2), BlockOwnerDeletion:(*bool)(0xc0020d2dd3)}}
Dec 24 12:36:57.968: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"12fd8837-264a-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0020d2fb2), BlockOwnerDeletion:(*bool)(0xc0020d2fb3)}}
Dec 24 12:36:58.004: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"12ff24dc-264a-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001ee8992), BlockOwnerDeletion:(*bool)(0xc001ee8993)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:37:03.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-r47fx" for this suite.
Dec 24 12:37:09.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:37:09.478: INFO: namespace: e2e-tests-gc-r47fx, resource: bindings, ignored listing per whitelist
Dec 24 12:37:09.519: INFO: namespace e2e-tests-gc-r47fx deletion completed in 6.321743584s

• [SLOW TEST:12.096 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:37:09.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 24 12:37:18.437: INFO: Successfully updated pod "annotationupdate1a369023-264a-11ea-b7c4-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:37:22.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dwhl9" for this suite.
Dec 24 12:37:46.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:37:46.775: INFO: namespace: e2e-tests-downward-api-dwhl9, resource: bindings, ignored listing per whitelist
Dec 24 12:37:46.844: INFO: namespace e2e-tests-downward-api-dwhl9 deletion completed in 24.163566593s

• [SLOW TEST:37.325 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
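Editor's note: the annotation-update test above exercises a downwardAPI volume whose projected file the kubelet refreshes when pod metadata changes. A minimal sketch of the same idea (pod name, image, and annotation values are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# modify the pod's annotations; the kubelet eventually rewrites /etc/podinfo/annotations
kubectl annotate pod annotationupdate-example build=2 --overwrite

The "Successfully updated pod" line above corresponds to that annotate step; the test then waits for the new value to appear in the mounted file.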
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:37:46.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 12:37:47.003: INFO: Creating deployment "test-recreate-deployment"
Dec 24 12:37:47.023: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 24 12:37:47.072: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 24 12:37:49.088: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 24 12:37:49.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 12:37:51.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 12:37:53.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 12:37:55.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 12:37:57.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712787867, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 12:37:59.128: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 24 12:37:59.169: INFO: Updating deployment test-recreate-deployment
Dec 24 12:37:59.169: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 24 12:38:00.609: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-xb2v5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xb2v5/deployments/test-recreate-deployment,UID:306c891b-264a-11ea-a994-fa163e34d433,ResourceVersion:15904356,Generation:2,CreationTimestamp:2019-12-24 12:37:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-24 12:37:59 +0000 UTC 2019-12-24 12:37:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-24 12:37:59 +0000 UTC 2019-12-24 12:37:47 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 24 12:38:00.650: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-xb2v5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xb2v5/replicasets/test-recreate-deployment-589c4bfd,UID:37cf0f71-264a-11ea-a994-fa163e34d433,ResourceVersion:15904354,Generation:1,CreationTimestamp:2019-12-24 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 306c891b-264a-11ea-a994-fa163e34d433 0xc00259fe8f 0xc00259fea0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 12:38:00.650: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 24 12:38:00.652: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-xb2v5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xb2v5/replicasets/test-recreate-deployment-5bf7f65dc,UID:3077e6a7-264a-11ea-a994-fa163e34d433,ResourceVersion:15904344,Generation:2,CreationTimestamp:2019-12-24 12:37:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 306c891b-264a-11ea-a994-fa163e34d433 0xc00259ff60 0xc00259ff61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 12:38:00.842: INFO: Pod "test-recreate-deployment-589c4bfd-7m4qq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-7m4qq,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-xb2v5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xb2v5/pods/test-recreate-deployment-589c4bfd-7m4qq,UID:37e13822-264a-11ea-a994-fa163e34d433,ResourceVersion:15904357,Generation:0,CreationTimestamp:2019-12-24 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 37cf0f71-264a-11ea-a994-fa163e34d433 0xc001aba88f 0xc001aba8a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7858h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7858h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7858h true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001aba900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001aba920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:37:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:37:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:37:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 12:37:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-24 12:37:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:38:00.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xb2v5" for this suite.
Dec 24 12:38:12.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:38:13.025: INFO: namespace: e2e-tests-deployment-xb2v5, resource: bindings, ignored listing per whitelist
Dec 24 12:38:13.099: INFO: namespace e2e-tests-deployment-xb2v5 deletion completed in 12.233197843s

• [SLOW TEST:26.254 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
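Editor's note: the Strategy{Type:Recreate,RollingUpdate:nil} seen in the dump above means the old ReplicaSet is scaled to zero before any new-revision pod is created, which is why the old redis ReplicaSet shows Replicas:*0 while the new nginx pod is still Pending. A rough equivalent of what the test does (object and label names are illustrative; the images match the ones in this run):

kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-example
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# trigger a new rollout; with type: Recreate all revision-1 pods are deleted
# before any revision-2 pod is started
kubectl set image deployment/test-recreate-example redis=docker.io/library/nginx:1.14-alpine
kubectl get rs,pods -l name=sample-pod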
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:38:13.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lrj2d
Dec 24 12:38:23.378: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lrj2d
STEP: checking the pod's current state and verifying that restartCount is present
Dec 24 12:38:23.383: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:42:25.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lrj2d" for this suite.
Dec 24 12:42:31.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:42:31.229: INFO: namespace: e2e-tests-container-probe-lrj2d, resource: bindings, ignored listing per whitelist
Dec 24 12:42:31.280: INFO: namespace e2e-tests-container-probe-lrj2d deletion completed in 6.233146984s

• [SLOW TEST:258.181 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
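Editor's note: this probe test holds the pod for roughly four minutes and asserts that restartCount stays at 0, because its /healthz handler keeps returning success. A minimal sketch of a pod with an HTTP liveness probe that always passes, using nginx and probing / as a stand-in for the test image's /healthz handler (names, ports, and timings are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
EOF

Because every probe returns 200, kubectl get pod liveness-http-example should keep showing RESTARTS 0; a handler that starts failing would instead drive restartCount up as the kubelet kills and restarts the container.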
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:42:31.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-2nqj
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 12:42:31.660: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2nqj" in namespace "e2e-tests-subpath-czsmq" to be "success or failure"
Dec 24 12:42:31.810: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 150.778363ms
Dec 24 12:42:33.831: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171096517s
Dec 24 12:42:35.889: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2289692s
Dec 24 12:42:38.193: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533361595s
Dec 24 12:42:40.244: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584846989s
Dec 24 12:42:42.271: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61117253s
Dec 24 12:42:44.290: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.630445347s
Dec 24 12:42:46.506: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.846874501s
Dec 24 12:42:48.552: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.892835762s
Dec 24 12:42:50.590: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 18.93090677s
Dec 24 12:42:52.657: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 20.997561807s
Dec 24 12:42:54.678: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 23.0184751s
Dec 24 12:42:56.704: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 25.044715802s
Dec 24 12:42:58.718: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 27.058542623s
Dec 24 12:43:00.738: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 29.078342095s
Dec 24 12:43:02.758: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 31.098645767s
Dec 24 12:43:04.779: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 33.118946456s
Dec 24 12:43:06.808: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Running", Reason="", readiness=false. Elapsed: 35.148236225s
Dec 24 12:43:08.943: INFO: Pod "pod-subpath-test-configmap-2nqj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.283753791s
STEP: Saw pod success
Dec 24 12:43:08.943: INFO: Pod "pod-subpath-test-configmap-2nqj" satisfied condition "success or failure"
Dec 24 12:43:08.952: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-2nqj container test-container-subpath-configmap-2nqj: 
STEP: delete the pod
Dec 24 12:43:09.120: INFO: Waiting for pod pod-subpath-test-configmap-2nqj to disappear
Dec 24 12:43:09.138: INFO: Pod pod-subpath-test-configmap-2nqj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2nqj
Dec 24 12:43:09.138: INFO: Deleting pod "pod-subpath-test-configmap-2nqj" in namespace "e2e-tests-subpath-czsmq"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:43:09.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-czsmq" for this suite.
Dec 24 12:43:15.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:43:15.219: INFO: namespace: e2e-tests-subpath-czsmq, resource: bindings, ignored listing per whitelist
Dec 24 12:43:15.462: INFO: namespace e2e-tests-subpath-czsmq deletion completed in 6.306124558s

• [SLOW TEST:44.182 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
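Editor's note: the Atomic writer subpath test mounts a single ConfigMap key into the container through a subPath and verifies its content while the pod runs. A minimal sketch of that mount shape (configmap name, key, image, and paths are illustrative):

kubectl create configmap subpath-example --from-literal=configmap-key=configmap-value
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["/bin/sh", "-c", "cat /test-volume/configmap-key"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/configmap-key
      subPath: configmap-key
  volumes:
  - name: test-volume
    configMap:
      name: subpath-example
EOF

A subPath mount exposes one entry of the volume at the given mountPath; unlike a whole-volume configMap mount, a file behind a subPath is not updated in place when the ConfigMap changes.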
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:43:15.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 24 12:43:15.766: INFO: Waiting up to 5m0s for pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-ctmxb" to be "success or failure"
Dec 24 12:43:15.823: INFO: Pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.557511ms
Dec 24 12:43:17.839: INFO: Pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072542027s
Dec 24 12:43:19.869: INFO: Pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102022013s
Dec 24 12:43:22.111: INFO: Pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344905194s
Dec 24 12:43:24.133: INFO: Pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366042193s
Dec 24 12:43:27.464: INFO: Pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.697762814s
STEP: Saw pod success
Dec 24 12:43:27.465: INFO: Pod "downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:43:27.716: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 12:43:27.853: INFO: Waiting for pod downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:43:27.870: INFO: Pod downward-api-f45d85aa-264a-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:43:27.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ctmxb" for this suite.
Dec 24 12:43:34.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:43:34.139: INFO: namespace: e2e-tests-downward-api-ctmxb, resource: bindings, ignored listing per whitelist
Dec 24 12:43:34.195: INFO: namespace e2e-tests-downward-api-ctmxb deletion completed in 6.315381148s

• [SLOW TEST:18.733 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
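Editor's note: the Downward API test above exposes limits.cpu and limits.memory as environment variables on a container that declares no resource limits, in which case the reported values default to the node's allocatable capacity. A minimal sketch (variable names and the image are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-limits-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF

resourceFieldRef also accepts a divisor (for example 1m for CPU or 1Mi for memory) to rescale the reported value.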
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:43:34.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 24 12:43:34.548: INFO: namespace e2e-tests-kubectl-zr55j
Dec 24 12:43:34.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zr55j'
Dec 24 12:43:36.718: INFO: stderr: ""
Dec 24 12:43:36.718: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 24 12:43:38.257: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:38.257: INFO: Found 0 / 1
Dec 24 12:43:38.883: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:38.883: INFO: Found 0 / 1
Dec 24 12:43:39.833: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:39.833: INFO: Found 0 / 1
Dec 24 12:43:40.737: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:40.737: INFO: Found 0 / 1
Dec 24 12:43:41.734: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:41.734: INFO: Found 0 / 1
Dec 24 12:43:42.919: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:42.919: INFO: Found 0 / 1
Dec 24 12:43:43.744: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:43.745: INFO: Found 0 / 1
Dec 24 12:43:44.866: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:44.866: INFO: Found 0 / 1
Dec 24 12:43:45.731: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:45.731: INFO: Found 0 / 1
Dec 24 12:43:46.742: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:46.742: INFO: Found 1 / 1
Dec 24 12:43:46.742: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 24 12:43:46.752: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:43:46.752: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 24 12:43:46.752: INFO: wait on redis-master startup in e2e-tests-kubectl-zr55j 
Dec 24 12:43:46.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-z5qp5 redis-master --namespace=e2e-tests-kubectl-zr55j'
Dec 24 12:43:47.012: INFO: stderr: ""
Dec 24 12:43:47.012: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Dec 12:43:45.219 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Dec 12:43:45.219 # Server started, Redis version 3.2.12\n1:M 24 Dec 12:43:45.219 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Dec 12:43:45.219 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 24 12:43:47.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-zr55j'
Dec 24 12:43:47.347: INFO: stderr: ""
Dec 24 12:43:47.347: INFO: stdout: "service/rm2 exposed\n"
Dec 24 12:43:47.431: INFO: Service rm2 in namespace e2e-tests-kubectl-zr55j found.
STEP: exposing service
Dec 24 12:43:49.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-zr55j'
Dec 24 12:43:49.829: INFO: stderr: ""
Dec 24 12:43:49.830: INFO: stdout: "service/rm3 exposed\n"
Dec 24 12:43:49.873: INFO: Service rm3 in namespace e2e-tests-kubectl-zr55j found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:43:51.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zr55j" for this suite.
Dec 24 12:44:15.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:44:16.161: INFO: namespace: e2e-tests-kubectl-zr55j, resource: bindings, ignored listing per whitelist
Dec 24 12:44:16.173: INFO: namespace e2e-tests-kubectl-zr55j deletion completed in 24.259889651s

• [SLOW TEST:41.978 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
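Editor's note: stripped of the --kubeconfig and --namespace plumbing, the expose sequence above is simply the following; the get commands at the end are an illustrative check, not part of the run:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get service rm2 rm3
kubectl get endpoints rm2 rm3

Both new Services select the same redis-master pods, so each should list the pod's IP on target port 6379 in its endpoints, only with different service ports (1234 and 2345).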
SSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:44:16.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-18ab1569-264b-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 12:44:16.923: INFO: Waiting up to 5m0s for pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-j484c" to be "success or failure"
Dec 24 12:44:16.953: INFO: Pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.836482ms
Dec 24 12:44:18.971: INFO: Pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048180336s
Dec 24 12:44:20.988: INFO: Pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064906955s
Dec 24 12:44:23.001: INFO: Pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078603877s
Dec 24 12:44:25.035: INFO: Pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111869168s
Dec 24 12:44:27.082: INFO: Pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159423563s
STEP: Saw pod success
Dec 24 12:44:27.082: INFO: Pod "pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:44:27.092: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 12:44:27.311: INFO: Waiting for pod pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:44:27.339: INFO: Pod pod-secrets-18d34eab-264b-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:44:27.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j484c" for this suite.
Dec 24 12:44:35.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:44:35.620: INFO: namespace: e2e-tests-secrets-j484c, resource: bindings, ignored listing per whitelist
Dec 24 12:44:35.695: INFO: namespace e2e-tests-secrets-j484c deletion completed in 8.321757652s
STEP: Destroying namespace "e2e-tests-secret-namespace-kpj8j" for this suite.
Dec 24 12:44:41.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:44:41.834: INFO: namespace: e2e-tests-secret-namespace-kpj8j, resource: bindings, ignored listing per whitelist
Dec 24 12:44:41.944: INFO: namespace e2e-tests-secret-namespace-kpj8j deletion completed in 6.248834866s

• [SLOW TEST:25.771 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
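Editor's note: this test creates a Secret with the same name in a second namespace and checks that the pod still mounts the one from its own namespace, which is why two namespaces are destroyed at the end of the entry above. A minimal sketch (namespace, secret, and pod names, the image, and the values are illustrative):

kubectl create namespace secret-namespace-b
kubectl create secret generic shared-secret-name --from-literal=data-1=from-test-namespace
kubectl create secret generic shared-secret-name --from-literal=data-1=from-other-namespace -n secret-namespace-b
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-cross-ns-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-secret-name
EOF

The pod's logs should show from-test-namespace: secret volume references resolve only within the pod's own namespace, regardless of identically named secrets elsewhere.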
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:44:41.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 24 12:44:42.156: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:44:58.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bfnxf" for this suite.
Dec 24 12:45:04.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:45:04.407: INFO: namespace: e2e-tests-init-container-bfnxf, resource: bindings, ignored listing per whitelist
Dec 24 12:45:04.715: INFO: namespace e2e-tests-init-container-bfnxf deletion completed in 6.460219638s

• [SLOW TEST:22.770 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
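Editor's note: the InitContainer test creates a RestartNever pod whose init containers must each run to completion, in order, before the app container starts. A minimal sketch (names, image, and commands are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/sh", "-c", "echo done"]
EOF

kubectl get pod pod-init-example shows Init:0/2, then Init:1/2, before the pod reaches Running and finally Completed; with restartPolicy Never, a failing init container would instead leave the pod in Init:Error.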
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:45:04.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:45:15.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-h4zkc" for this suite.
Dec 24 12:45:59.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:45:59.285: INFO: namespace: e2e-tests-kubelet-test-h4zkc, resource: bindings, ignored listing per whitelist
Dec 24 12:45:59.432: INFO: namespace e2e-tests-kubelet-test-h4zkc deletion completed in 44.261605879s

• [SLOW TEST:54.716 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
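Editor's note: the hostAliases test verifies that entries from pod.spec.hostAliases are appended to the container's kubelet-managed /etc/hosts. A minimal sketch (IP and hostnames are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-example
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/hosts"]
EOF

kubectl logs busybox-host-aliases-example should show a line like "127.0.0.1 foo.local bar.local" added below the default entries.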
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:45:59.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-561f850f-264b-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 12:45:59.781: INFO: Waiting up to 5m0s for pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-zv2ld" to be "success or failure"
Dec 24 12:45:59.790: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.069963ms
Dec 24 12:46:01.829: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048416186s
Dec 24 12:46:03.888: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106892891s
Dec 24 12:46:06.199: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418391807s
Dec 24 12:46:08.230: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449165329s
Dec 24 12:46:10.323: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.542715278s
Dec 24 12:46:12.530: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.749065872s
STEP: Saw pod success
Dec 24 12:46:12.530: INFO: Pod "pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:46:12.555: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 12:46:12.713: INFO: Waiting for pod pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:46:12.849: INFO: Pod pod-secrets-562072f9-264b-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:46:12.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zv2ld" for this suite.
Dec 24 12:46:18.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:46:18.956: INFO: namespace: e2e-tests-secrets-zv2ld, resource: bindings, ignored listing per whitelist
Dec 24 12:46:19.111: INFO: namespace e2e-tests-secrets-zv2ld deletion completed in 6.223003691s

• [SLOW TEST:19.677 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
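Editor's note: this variant mounts the secret as a non-root user, with both defaultMode and fsGroup set so the files remain readable through group ownership. A minimal sketch (uid/gid, names, image, and mode are illustrative):

kubectl create secret generic secret-fsgroup-example --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-fsgroup-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "id && ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-fsgroup-example
      defaultMode: 0440
EOF

With fsGroup set, the kubelet applies gid 1001 as the group owner of the secret files, so mode 0440 is enough for the non-root (uid 1000) process to read them.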
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:46:19.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 24 12:46:19.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r25b8'
Dec 24 12:46:19.988: INFO: stderr: ""
Dec 24 12:46:19.988: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 24 12:46:21.011: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:21.011: INFO: Found 0 / 1
Dec 24 12:46:22.015: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:22.015: INFO: Found 0 / 1
Dec 24 12:46:23.091: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:23.091: INFO: Found 0 / 1
Dec 24 12:46:24.026: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:24.026: INFO: Found 0 / 1
Dec 24 12:46:24.997: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:24.997: INFO: Found 0 / 1
Dec 24 12:46:26.193: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:26.193: INFO: Found 0 / 1
Dec 24 12:46:27.011: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:27.011: INFO: Found 0 / 1
Dec 24 12:46:28.107: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:28.107: INFO: Found 0 / 1
Dec 24 12:46:29.067: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:29.067: INFO: Found 0 / 1
Dec 24 12:46:30.027: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:30.027: INFO: Found 0 / 1
Dec 24 12:46:31.005: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:31.005: INFO: Found 1 / 1
Dec 24 12:46:31.005: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 24 12:46:31.014: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 12:46:31.015: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 24 12:46:31.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7n7wc redis-master --namespace=e2e-tests-kubectl-r25b8'
Dec 24 12:46:31.192: INFO: stderr: ""
Dec 24 12:46:31.193: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Dec 12:46:28.720 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Dec 12:46:28.721 # Server started, Redis version 3.2.12\n1:M 24 Dec 12:46:28.721 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Dec 12:46:28.721 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 24 12:46:31.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7n7wc redis-master --namespace=e2e-tests-kubectl-r25b8 --tail=1'
Dec 24 12:46:31.368: INFO: stderr: ""
Dec 24 12:46:31.368: INFO: stdout: "1:M 24 Dec 12:46:28.721 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 24 12:46:31.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7n7wc redis-master --namespace=e2e-tests-kubectl-r25b8 --limit-bytes=1'
Dec 24 12:46:31.517: INFO: stderr: ""
Dec 24 12:46:31.517: INFO: stdout: " "
STEP: exposing timestamps
Dec 24 12:46:31.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7n7wc redis-master --namespace=e2e-tests-kubectl-r25b8 --tail=1 --timestamps'
Dec 24 12:46:31.709: INFO: stderr: ""
Dec 24 12:46:31.709: INFO: stdout: "2019-12-24T12:46:28.723423256Z 1:M 24 Dec 12:46:28.721 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 24 12:46:34.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7n7wc redis-master --namespace=e2e-tests-kubectl-r25b8 --since=1s'
Dec 24 12:46:34.428: INFO: stderr: ""
Dec 24 12:46:34.428: INFO: stdout: ""
Dec 24 12:46:34.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7n7wc redis-master --namespace=e2e-tests-kubectl-r25b8 --since=24h'
Dec 24 12:46:34.679: INFO: stderr: ""
Dec 24 12:46:34.679: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Dec 12:46:28.720 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Dec 12:46:28.721 # Server started, Redis version 3.2.12\n1:M 24 Dec 12:46:28.721 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Dec 12:46:28.721 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 24 12:46:34.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r25b8'
Dec 24 12:46:34.816: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 12:46:34.816: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 24 12:46:34.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-r25b8'
Dec 24 12:46:34.969: INFO: stderr: "No resources found.\n"
Dec 24 12:46:34.969: INFO: stdout: ""
Dec 24 12:46:34.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-r25b8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 12:46:35.173: INFO: stderr: ""
Dec 24 12:46:35.173: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:46:35.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r25b8" for this suite.
Dec 24 12:46:59.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:46:59.493: INFO: namespace: e2e-tests-kubectl-r25b8, resource: bindings, ignored listing per whitelist
Dec 24 12:46:59.601: INFO: namespace e2e-tests-kubectl-r25b8 deletion completed in 24.416300981s

• [SLOW TEST:40.490 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
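
The log-filtering steps above map one-to-one onto kubectl flags; a minimal sketch using the pod, container and namespace names from this run (the test invokes the deprecated "kubectl log" alias, spelled "kubectl logs" here):

# Full container log
kubectl logs redis-master-7n7wc -c redis-master -n e2e-tests-kubectl-r25b8
# Last line only
kubectl logs redis-master-7n7wc -c redis-master -n e2e-tests-kubectl-r25b8 --tail=1
# First byte only
kubectl logs redis-master-7n7wc -c redis-master -n e2e-tests-kubectl-r25b8 --limit-bytes=1
# Prefix each line with its RFC3339 timestamp
kubectl logs redis-master-7n7wc -c redis-master -n e2e-tests-kubectl-r25b8 --tail=1 --timestamps
# Restrict to a time window
kubectl logs redis-master-7n7wc -c redis-master -n e2e-tests-kubectl-r25b8 --since=1s
kubectl logs redis-master-7n7wc -c redis-master -n e2e-tests-kubectl-r25b8 --since=24h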
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:46:59.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 24 12:46:59.771: INFO: PodSpec: initContainers in spec.initContainers
Dec 24 12:48:08.806: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-79e5ac9e-264b-11ea-b7c4-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-lnhvn", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-lnhvn/pods/pod-init-79e5ac9e-264b-11ea-b7c4-0242ac110005", UID:"79e7031d-264b-11ea-a994-fa163e34d433", ResourceVersion:"15905412", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712788419, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"771934464"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-sbdcb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002862540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sbdcb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sbdcb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sbdcb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029a40f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028bc000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029a4170)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029a4190)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0029a4198), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029a419c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712788419, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712788419, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712788419, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712788419, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002a1a040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002942070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029420e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://7340ff9c198beb6cdca501b876f1f69b0632e97f985e1aa915604ae482a0994d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a1a080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a1a060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:48:08.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lnhvn" for this suite.
Dec 24 12:48:32.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:48:33.018: INFO: namespace: e2e-tests-init-container-lnhvn, resource: bindings, ignored listing per whitelist
Dec 24 12:48:33.035: INFO: namespace e2e-tests-init-container-lnhvn deletion completed in 24.20332916s

• [SLOW TEST:93.433 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
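
The pod dumped above can be reconstructed from its spec fields (init1 runs /bin/false, init2 /bin/true, app container run1 is pause with 100m CPU and 52428800 bytes of memory, restartPolicy Always); a minimal sketch of an equivalent manifest, with a hypothetical pod name in place of the generated one:

kubectl apply -n e2e-tests-init-container-lnhvn -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example          # hypothetical; the test generates a UUID-based name
  labels: {name: foo}
spec:
  restartPolicy: Always
  initContainers:
  - {name: init1, image: docker.io/library/busybox:1.29, command: ["/bin/false"]}
  - {name: init2, image: docker.io/library/busybox:1.29, command: ["/bin/true"]}
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:   {cpu: 100m, memory: "52428800"}
      requests: {cpu: 100m, memory: "52428800"}
EOF
# init1 keeps failing and restarting, so init2 stays Waiting and run1 never starts
kubectl get pod pod-init-example -n e2e-tests-init-container-lnhvn -w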
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:48:33.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 24 12:48:33.147: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:48:33.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6qgsl" for this suite.
Dec 24 12:48:39.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:48:39.416: INFO: namespace: e2e-tests-kubectl-6qgsl, resource: bindings, ignored listing per whitelist
Dec 24 12:48:39.487: INFO: namespace e2e-tests-kubectl-6qgsl deletion completed in 6.220609198s

• [SLOW TEST:6.452 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
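
--port 0 asks the proxy to bind an ephemeral port, which it prints on startup; a sketch of the same check done by hand (the --disable-filter flag used by the test turns off request filtering and is unsafe outside a test environment):

kubectl proxy --port=0 --disable-filter > /tmp/proxy.out 2>&1 &
sleep 1
# The proxy prints a line like "Starting to serve on 127.0.0.1:<port>"; pull the port out of it
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
curl -s "http://127.0.0.1:${PORT}/api/"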
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:48:39.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 12:48:39.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-gkqr5'
Dec 24 12:48:39.729: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 12:48:39.729: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 24 12:48:39.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-gkqr5'
Dec 24 12:48:40.036: INFO: stderr: ""
Dec 24 12:48:40.036: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:48:40.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gkqr5" for this suite.
Dec 24 12:48:48.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:48:48.235: INFO: namespace: e2e-tests-kubectl-gkqr5, resource: bindings, ignored listing per whitelist
Dec 24 12:48:48.335: INFO: namespace e2e-tests-kubectl-gkqr5 deletion completed in 8.256982161s

• [SLOW TEST:8.847 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
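
The command the test runs is reproduced below; the job/v1 generator is deprecated (the warning appears in the log above), and on newer clients "kubectl create job" is the assumed replacement:

# As run by the test (deprecated generator)
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine -n e2e-tests-kubectl-gkqr5
# Rough modern equivalent (assumes a kubectl new enough to have "create job")
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine -n e2e-tests-kubectl-gkqr5
# Verify and clean up
kubectl get jobs e2e-test-nginx-job -n e2e-tests-kubectl-gkqr5
kubectl delete jobs e2e-test-nginx-job -n e2e-tests-kubectl-gkqr5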
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:48:48.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 24 12:48:48.728: INFO: Waiting up to 5m0s for pod "pod-bad48239-264b-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-kpbnv" to be "success or failure"
Dec 24 12:48:48.755: INFO: Pod "pod-bad48239-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.233449ms
Dec 24 12:48:50.847: INFO: Pod "pod-bad48239-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119330234s
Dec 24 12:48:52.876: INFO: Pod "pod-bad48239-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148013506s
Dec 24 12:48:54.924: INFO: Pod "pod-bad48239-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195697451s
Dec 24 12:48:56.939: INFO: Pod "pod-bad48239-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21106328s
Dec 24 12:48:58.953: INFO: Pod "pod-bad48239-264b-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.225499107s
STEP: Saw pod success
Dec 24 12:48:58.953: INFO: Pod "pod-bad48239-264b-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:48:58.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bad48239-264b-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 12:48:59.892: INFO: Waiting for pod pod-bad48239-264b-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:49:00.174: INFO: Pod pod-bad48239-264b-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:49:00.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kpbnv" for this suite.
Dec 24 12:49:06.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:49:06.468: INFO: namespace: e2e-tests-emptydir-kpbnv, resource: bindings, ignored listing per whitelist
Dec 24 12:49:06.589: INFO: namespace e2e-tests-emptydir-kpbnv deletion completed in 6.397264774s

• [SLOW TEST:18.253 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
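
The (root,0777,tmpfs) case mounts an emptyDir backed by memory and checks the 0777 mode on the mount; a hypothetical manifest sketching the same setup (the e2e test's actual image and verification arguments differ):

kubectl apply -n e2e-tests-emptydir-kpbnv -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0777       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expect 777
    volumeMounts:
    - {name: test-volume, mountPath: /test-volume}
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir
EOF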
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:49:06.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 24 12:49:07.096: INFO: Waiting up to 5m0s for pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005" in namespace "e2e-tests-containers-qxrg5" to be "success or failure"
Dec 24 12:49:07.111: INFO: Pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.765304ms
Dec 24 12:49:09.360: INFO: Pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26369551s
Dec 24 12:49:11.384: INFO: Pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287683125s
Dec 24 12:49:13.667: INFO: Pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570817785s
Dec 24 12:49:15.683: INFO: Pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.58651236s
Dec 24 12:49:17.698: INFO: Pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.601628047s
STEP: Saw pod success
Dec 24 12:49:17.698: INFO: Pod "client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:49:17.703: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 12:49:17.772: INFO: Waiting for pod client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:49:17.789: INFO: Pod client-containers-c5c46a84-264b-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:49:17.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qxrg5" for this suite.
Dec 24 12:49:23.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:49:24.000: INFO: namespace: e2e-tests-containers-qxrg5, resource: bindings, ignored listing per whitelist
Dec 24 12:49:24.113: INFO: namespace e2e-tests-containers-qxrg5 deletion completed in 6.295110048s

• [SLOW TEST:17.523 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
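
Overriding the image's default command comes down to setting command on the container (which replaces the Docker ENTRYPOINT, while args replaces CMD); a minimal sketch with hypothetical names:

kubectl apply -n e2e-tests-containers-qxrg5 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]            # overrides the image ENTRYPOINT
    args: ["override", "command"]     # overrides the image CMD
EOF
kubectl logs client-containers-example -n e2e-tests-containers-qxrg5   # prints "override command"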
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:49:24.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 24 12:49:24.498: INFO: Number of nodes with available pods: 0
Dec 24 12:49:24.498: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:25.945: INFO: Number of nodes with available pods: 0
Dec 24 12:49:25.946: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:26.787: INFO: Number of nodes with available pods: 0
Dec 24 12:49:26.787: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:27.527: INFO: Number of nodes with available pods: 0
Dec 24 12:49:27.527: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:28.586: INFO: Number of nodes with available pods: 0
Dec 24 12:49:28.586: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:29.917: INFO: Number of nodes with available pods: 0
Dec 24 12:49:29.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:30.535: INFO: Number of nodes with available pods: 0
Dec 24 12:49:30.535: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:31.642: INFO: Number of nodes with available pods: 0
Dec 24 12:49:31.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:32.595: INFO: Number of nodes with available pods: 0
Dec 24 12:49:32.595: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:33.640: INFO: Number of nodes with available pods: 0
Dec 24 12:49:33.640: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:49:34.540: INFO: Number of nodes with available pods: 1
Dec 24 12:49:34.540: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 24 12:49:34.706: INFO: Number of nodes with available pods: 1
Dec 24 12:49:34.706: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lvgmd, will wait for the garbage collector to delete the pods
Dec 24 12:49:37.209: INFO: Deleting DaemonSet.extensions daemon-set took: 17.319381ms
Dec 24 12:49:38.310: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.101016567s
Dec 24 12:49:42.672: INFO: Number of nodes with available pods: 0
Dec 24 12:49:42.672: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 12:49:42.678: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lvgmd/daemonsets","resourceVersion":"15905659"},"items":null}

Dec 24 12:49:42.680: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lvgmd/pods","resourceVersion":"15905659"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:49:42.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lvgmd" for this suite.
Dec 24 12:49:48.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:49:48.814: INFO: namespace: e2e-tests-daemonsets-lvgmd, resource: bindings, ignored listing per whitelist
Dec 24 12:49:48.848: INFO: namespace e2e-tests-daemonsets-lvgmd deletion completed in 6.155846297s

• [SLOW TEST:24.735 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
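
The test flips a daemon pod's phase to Failed through the API and checks that the controller replaces it; run by hand, the closest equivalent is deleting the pod and watching the DaemonSet recreate it, a sketch of which follows (only the name daemon-set comes from the log, the other manifest fields are assumptions):

kubectl apply -n e2e-tests-daemonsets-lvgmd -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {name: daemon-set}
  template:
    metadata:
      labels: {name: daemon-set}
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image
EOF
# Kill one daemon pod; the controller should bring it back on the same node
kubectl delete pod -l name=daemon-set -n e2e-tests-daemonsets-lvgmd --wait=false
kubectl get pods -l name=daemon-set -n e2e-tests-daemonsets-lvgmd -o wide -w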
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:49:48.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 12:49:49.214: INFO: Creating ReplicaSet my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005
Dec 24 12:49:49.236: INFO: Pod name my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005: Found 0 pods out of 1
Dec 24 12:49:54.256: INFO: Pod name my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005: Found 1 pods out of 1
Dec 24 12:49:54.256: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005" is running
Dec 24 12:50:00.280: INFO: Pod "my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005-jq4j5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 12:49:49 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 12:49:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 12:49:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 12:49:49 +0000 UTC Reason: Message:}])
Dec 24 12:50:00.280: INFO: Trying to dial the pod
Dec 24 12:50:05.337: INFO: Controller my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005: Got expected result from replica 1 [my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005-jq4j5]: "my-hostname-basic-dee47fea-264b-11ea-b7c4-0242ac110005-jq4j5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:50:05.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-w9b5j" for this suite.
Dec 24 12:50:13.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:50:13.537: INFO: namespace: e2e-tests-replicaset-w9b5j, resource: bindings, ignored listing per whitelist
Dec 24 12:50:13.615: INFO: namespace e2e-tests-replicaset-w9b5j deletion completed in 8.267688696s

• [SLOW TEST:24.767 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
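
Each replica is expected to answer with its own pod name when dialed; a sketch with an assumed hostname-serving image, since the image used by the test is not shown in the log above:

kubectl apply -n e2e-tests-replicaset-w9b5j -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels: {name: my-hostname-basic}
  template:
    metadata:
      labels: {name: my-hostname-basic}
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve_hostname:1.1   # assumed image; serves its hostname over HTTP
EOF
# List the replicas and the node/IP each one landed on
kubectl get pods -l name=my-hostname-basic -n e2e-tests-replicaset-w9b5j -o wide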
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:50:13.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-65rm7
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-65rm7 to expose endpoints map[]
Dec 24 12:50:14.933: INFO: Get endpoints failed (20.050638ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 24 12:50:15.957: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-65rm7 exposes endpoints map[] (1.043371534s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-65rm7
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-65rm7 to expose endpoints map[pod1:[100]]
Dec 24 12:50:20.944: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.963299782s elapsed, will retry)
Dec 24 12:50:26.777: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-65rm7 exposes endpoints map[pod1:[100]] (10.796696839s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-65rm7
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-65rm7 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 24 12:50:31.120: INFO: Unexpected endpoints: found map[eed7c780-264b-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.319719192s elapsed, will retry)
Dec 24 12:50:36.665: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-65rm7 exposes endpoints map[pod1:[100] pod2:[101]] (9.864399001s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-65rm7
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-65rm7 to expose endpoints map[pod2:[101]]
Dec 24 12:50:38.618: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-65rm7 exposes endpoints map[pod2:[101]] (1.944515223s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-65rm7
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-65rm7 to expose endpoints map[]
Dec 24 12:50:40.181: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-65rm7 exposes endpoints map[] (1.19439465s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:50:40.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-65rm7" for this suite.
Dec 24 12:51:04.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:51:05.006: INFO: namespace: e2e-tests-services-65rm7, resource: bindings, ignored listing per whitelist
Dec 24 12:51:05.087: INFO: namespace e2e-tests-services-65rm7 deletion completed in 24.212743354s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:51.470 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
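
The service in this test exposes two ports (100 and 101), and the endpoints object tracks which pods back each of them as pod1 and pod2 come and go; a rough sketch of the same shape, with assumed port names, targetPorts and selector:

kubectl apply -n e2e-tests-services-65rm7 -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector: {app: multi-endpoint-test}   # assumed selector
  ports:
  - {name: portname1, port: 100, targetPort: 8080}
  - {name: portname2, port: 101, targetPort: 8081}
EOF
# Watch the endpoints fill in and empty out as matching pods are created and deleted
kubectl get endpoints multi-endpoint-test -n e2e-tests-services-65rm7 -w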
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:51:05.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-7vvtx/secret-test-0c49c3b4-264c-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 12:51:05.435: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-7vvtx" to be "success or failure"
Dec 24 12:51:05.450: INFO: Pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.452617ms
Dec 24 12:51:07.478: INFO: Pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043092405s
Dec 24 12:51:09.487: INFO: Pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051280607s
Dec 24 12:51:11.929: INFO: Pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493215636s
Dec 24 12:51:13.953: INFO: Pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517433081s
Dec 24 12:51:16.000: INFO: Pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.564222618s
STEP: Saw pod success
Dec 24 12:51:16.000: INFO: Pod "pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:51:16.016: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005 container env-test: 
STEP: delete the pod
Dec 24 12:51:16.566: INFO: Waiting for pod pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:51:16.579: INFO: Pod pod-configmaps-0c4c24fa-264c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:51:16.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7vvtx" for this suite.
Dec 24 12:51:23.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:51:23.346: INFO: namespace: e2e-tests-secrets-7vvtx, resource: bindings, ignored listing per whitelist
Dec 24 12:51:23.356: INFO: namespace e2e-tests-secrets-7vvtx deletion completed in 6.760356301s

• [SLOW TEST:18.268 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
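
Consuming a secret through the environment means wiring an env var to a secretKeyRef; a minimal sketch with hypothetical secret, key and pod names:

kubectl create secret generic secret-test --from-literal=data-1=value-1 -n e2e-tests-secrets-7vvtx
kubectl apply -n e2e-tests-secrets-7vvtx -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef: {name: secret-test, key: data-1}
EOF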
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:51:23.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-1747fd03-264c-11ea-b7c4-0242ac110005
STEP: Creating secret with name s-test-opt-upd-1747ffc6-264c-11ea-b7c4-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1747fd03-264c-11ea-b7c4-0242ac110005
STEP: Updating secret s-test-opt-upd-1747ffc6-264c-11ea-b7c4-0242ac110005
STEP: Creating secret with name s-test-opt-create-1748004b-264c-11ea-b7c4-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:53:00.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2vnrt" for this suite.
Dec 24 12:53:24.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:53:24.399: INFO: namespace: e2e-tests-secrets-2vnrt, resource: bindings, ignored listing per whitelist
Dec 24 12:53:24.428: INFO: namespace e2e-tests-secrets-2vnrt deletion completed in 24.221053667s

• [SLOW TEST:121.072 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
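
The optional-updates test mounts secrets marked optional: true, then deletes one, updates another and creates a third, waiting for the kubelet to reflect each change in the mounted files; a sketch of the mount side, with hypothetical names in place of the UUID-suffixed ones above:

kubectl apply -n e2e-tests-secrets-2vnrt -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-example   # hypothetical name
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls -lR /etc/secret-volumes; sleep 5; done"]
    volumeMounts:
    - {name: del,    mountPath: /etc/secret-volumes/del}
    - {name: upd,    mountPath: /etc/secret-volumes/upd}
    - {name: create, mountPath: /etc/secret-volumes/create}
  volumes:
  - name: del
    secret: {secretName: s-test-opt-del, optional: true}
  - name: upd
    secret: {secretName: s-test-opt-upd, optional: true}
  - name: create
    secret: {secretName: s-test-opt-create, optional: true}   # may not exist yet; optional lets the pod start
EOF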
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:53:24.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-5f4f93f0-264c-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 12:53:24.756: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-h6jlh" to be "success or failure"
Dec 24 12:53:24.770: INFO: Pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.495299ms
Dec 24 12:53:27.199: INFO: Pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443402328s
Dec 24 12:53:29.218: INFO: Pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46259619s
Dec 24 12:53:31.445: INFO: Pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68914894s
Dec 24 12:53:33.466: INFO: Pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.710096944s
Dec 24 12:53:36.634: INFO: Pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.877917213s
STEP: Saw pod success
Dec 24 12:53:36.634: INFO: Pod "pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:53:36.656: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 24 12:53:36.870: INFO: Waiting for pod pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:53:36.887: INFO: Pod pod-configmaps-5f50d253-264c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:53:36.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h6jlh" for this suite.
Dec 24 12:53:43.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:53:43.184: INFO: namespace: e2e-tests-configmap-h6jlh, resource: bindings, ignored listing per whitelist
Dec 24 12:53:43.204: INFO: namespace e2e-tests-configmap-h6jlh deletion completed in 6.308805866s

• [SLOW TEST:18.776 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
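
defaultMode on a configMap volume sets the file mode of every projected key; a minimal sketch with a hypothetical key and a 0400 mode:

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1 -n e2e-tests-configmap-h6jlh
kubectl apply -n e2e-tests-configmap-h6jlh -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mode-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -L -c '%a' /etc/configmap-volume/data-1"]   # expect 400
    volumeMounts:
    - {name: cm, mountPath: /etc/configmap-volume}
  volumes:
  - name: cm
    configMap:
      name: configmap-test-volume
      defaultMode: 0400
EOF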
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:53:43.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-q4gpm.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q4gpm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q4gpm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-q4gpm.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q4gpm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q4gpm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 12:54:01.616: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.624: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.635: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.645: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.651: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.656: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.661: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q4gpm.svc.cluster.local from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.669: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.675: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.680: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005)
Dec 24 12:54:01.680: INFO: Lookups using e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q4gpm.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 24 12:54:06.816: INFO: DNS probes using e2e-tests-dns-q4gpm/dns-test-6a77ee79-264c-11ea-b7c4-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:54:07.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-q4gpm" for this suite.
Dec 24 12:54:15.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:54:15.270: INFO: namespace: e2e-tests-dns-q4gpm, resource: bindings, ignored listing per whitelist
Dec 24 12:54:15.376: INFO: namespace e2e-tests-dns-q4gpm deletion completed in 8.329799953s

• [SLOW TEST:32.172 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
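
The spec above exercises cluster DNS by running dig against the service and pod A records from inside prober pods. A minimal manual reproduction might look like the sketch below; the pod name and dnsutils image tag are illustrative assumptions, not taken from this run.
# Launch a throwaway prober pod (any image that ships dig/getent will do)
kubectl run dns-prober --restart=Never --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7 -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-prober
# UDP and TCP lookups for the API server service, mirroring the wheezy/jessie probes
kubectl exec dns-prober -- dig +notcp +noall +answer +search kubernetes.default A
kubectl exec dns-prober -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A
# Hosts-file style resolution of the service name
kubectl exec dns-prober -- getent hosts kubernetes.default
kubectl delete pod dns-prober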
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:54:15.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 12:54:15.599: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.652543ms)
Dec 24 12:54:15.606: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.631201ms)
Dec 24 12:54:15.614: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.937764ms)
Dec 24 12:54:15.670: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 55.455803ms)
Dec 24 12:54:15.680: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.68471ms)
Dec 24 12:54:15.685: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.924643ms)
Dec 24 12:54:15.690: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.313713ms)
Dec 24 12:54:15.694: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.035958ms)
Dec 24 12:54:15.698: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.718701ms)
Dec 24 12:54:15.711: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.667575ms)
Dec 24 12:54:15.719: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.696929ms)
Dec 24 12:54:15.725: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.982477ms)
Dec 24 12:54:15.735: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.363443ms)
Dec 24 12:54:15.741: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.113136ms)
Dec 24 12:54:15.750: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.806254ms)
Dec 24 12:54:15.767: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.752236ms)
Dec 24 12:54:15.811: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 43.343862ms)
Dec 24 12:54:15.823: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.182915ms)
Dec 24 12:54:15.828: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.653439ms)
Dec 24 12:54:15.834: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.375094ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:54:15.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-k7vqf" for this suite.
Dec 24 12:54:21.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:54:22.024: INFO: namespace: e2e-tests-proxy-k7vqf, resource: bindings, ignored listing per whitelist
Dec 24 12:54:22.730: INFO: namespace e2e-tests-proxy-k7vqf deletion completed in 6.888723304s

• [SLOW TEST:7.354 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
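
The proxy spec above reads the kubelet's /logs/ listing through the API server's node proxy subresource, naming port 10250 explicitly. The same request can be issued by hand; the node name below is the one from this run and would differ on another cluster.
# Proxy to the kubelet's log listing via the apiserver, with the explicit kubelet port
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
# Equivalent form that lets the apiserver pick the default kubelet port
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"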
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:54:22.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 12:54:23.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-hc9hm" to be "success or failure"
Dec 24 12:54:23.237: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.823621ms
Dec 24 12:54:25.255: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04369438s
Dec 24 12:54:27.285: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073105833s
Dec 24 12:54:30.249: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.037859087s
Dec 24 12:54:32.274: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.06196572s
Dec 24 12:54:34.293: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.081640214s
Dec 24 12:54:36.311: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.099735852s
Dec 24 12:54:38.318: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.10661121s
STEP: Saw pod success
Dec 24 12:54:38.318: INFO: Pod "downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:54:38.323: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 12:54:38.396: INFO: Waiting for pod downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:54:40.977: INFO: Pod downwardapi-volume-82314ed5-264c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:54:40.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hc9hm" for this suite.
Dec 24 12:54:51.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:54:52.069: INFO: namespace: e2e-tests-downward-api-hc9hm, resource: bindings, ignored listing per whitelist
Dec 24 12:54:52.072: INFO: namespace e2e-tests-downward-api-hc9hm deletion completed in 10.466409516s

• [SLOW TEST:29.341 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
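
The downward API volume spec mounts the pod's own name as a file and expects the client container to print it before exiting successfully. A minimal sketch of such a pod, with illustrative names and image:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-podname-demo   # should print the pod name once it has completed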
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:54:52.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 12:54:52.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-dc998" to be "success or failure"
Dec 24 12:54:52.422: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.515704ms
Dec 24 12:54:57.019: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.626046932s
Dec 24 12:54:59.885: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.492388491s
Dec 24 12:55:01.944: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.551158928s
Dec 24 12:55:03.967: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.573440101s
Dec 24 12:55:05.995: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.601761815s
Dec 24 12:55:08.068: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.675003932s
Dec 24 12:55:10.087: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.694122583s
STEP: Saw pod success
Dec 24 12:55:10.087: INFO: Pod "downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:55:10.099: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 12:55:10.202: INFO: Waiting for pod downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:55:10.229: INFO: Pod downwardapi-volume-9396acc3-264c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:55:10.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dc998" for this suite.
Dec 24 12:55:16.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:55:16.607: INFO: namespace: e2e-tests-projected-dc998, resource: bindings, ignored listing per whitelist
Dec 24 12:55:16.804: INFO: namespace e2e-tests-projected-dc998 deletion completed in 6.553219649s

• [SLOW TEST:24.732 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
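
Here the downward API data is delivered through a projected volume instead, with the container's memory request rendered into a file. A sketch assuming a 32Mi request; names and image are illustrative:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-memory-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
EOF
kubectl logs projected-memory-request-demo   # expected to print 32 (the request in Mi)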
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:55:16.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 24 12:55:49.618: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:49.618: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:50.036: INFO: Exec stderr: ""
Dec 24 12:55:50.037: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:50.037: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:50.773: INFO: Exec stderr: ""
Dec 24 12:55:50.773: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:50.773: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:51.136: INFO: Exec stderr: ""
Dec 24 12:55:51.137: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:51.137: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:51.444: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 24 12:55:51.444: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:51.444: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:51.722: INFO: Exec stderr: ""
Dec 24 12:55:51.723: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:51.723: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:52.020: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 24 12:55:52.021: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:52.021: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:52.333: INFO: Exec stderr: ""
Dec 24 12:55:52.333: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:52.333: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:52.787: INFO: Exec stderr: ""
Dec 24 12:55:52.787: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:52.787: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:53.207: INFO: Exec stderr: ""
Dec 24 12:55:53.207: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bzz28 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 12:55:53.207: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:55:53.515: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:55:53.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-bzz28" for this suite.
Dec 24 12:56:45.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:56:45.607: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-bzz28, resource: bindings, ignored listing per whitelist
Dec 24 12:56:45.764: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-bzz28 deletion completed in 52.23542583s

• [SLOW TEST:88.958 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
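
The /etc/hosts spec compares a kubelet-managed hosts file (hostNetwork=false, no explicit /etc/hosts mount) against the node's own file (hostNetwork=true). A quick manual check; pod names and image are hypothetical, and the management header is what the kubelet is expected to write:
# Pod with the default, kubelet-managed /etc/hosts
kubectl run etc-hosts-demo --restart=Never --image=busybox -- sleep 3600
kubectl wait --for=condition=Ready pod/etc-hosts-demo
kubectl exec etc-hosts-demo -- cat /etc/hosts   # expected to carry the "Kubernetes-managed hosts file" header
# Pod on the host network sees the node's own /etc/hosts instead
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnet-demo
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl wait --for=condition=Ready pod/etc-hosts-hostnet-demo
kubectl exec etc-hosts-hostnet-demo -- cat /etc/hosts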
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:56:45.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1224 12:57:03.979978       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 12:57:03.980: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:57:03.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-l6lqm" for this suite.
Dec 24 12:57:31.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:57:31.753: INFO: namespace: e2e-tests-gc-l6lqm, resource: bindings, ignored listing per whitelist
Dec 24 12:57:31.758: INFO: namespace e2e-tests-gc-l6lqm deletion completed in 27.727753251s

• [SLOW TEST:45.994 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
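
The garbage-collector spec creates two replication controllers, gives half of the first RC's pods a second ownerReference pointing at the surviving RC, and checks that a foreground delete of the first owner leaves those doubly-owned pods alone. The relevant pieces can be inspected by hand; the pod name below is a placeholder, and the RC name echoes this run.
# Show who owns a pod; a pod with two owners is only collected once all owners are gone
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'
# Delete one owner with foreground propagation (kubectl v1.20+; older clients set
# propagationPolicy=Foreground in DeleteOptions through the REST API instead)
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground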
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:57:31.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 24 12:57:32.207: INFO: Waiting up to 5m0s for pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-b48mg" to be "success or failure"
Dec 24 12:57:32.391: INFO: Pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 184.731583ms
Dec 24 12:57:34.776: INFO: Pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56895234s
Dec 24 12:57:36.793: INFO: Pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586789953s
Dec 24 12:57:38.806: INFO: Pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599167413s
Dec 24 12:57:40.942: INFO: Pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735773251s
Dec 24 12:57:42.976: INFO: Pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.769641781s
STEP: Saw pod success
Dec 24 12:57:42.976: INFO: Pod "downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 12:57:43.000: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 24 12:57:43.310: INFO: Waiting for pod downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005 to disappear
Dec 24 12:57:43.322: INFO: Pod downward-api-f2c34e41-264c-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:57:43.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b48mg" for this suite.
Dec 24 12:57:49.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:57:49.614: INFO: namespace: e2e-tests-downward-api-b48mg, resource: bindings, ignored listing per whitelist
Dec 24 12:57:49.627: INFO: namespace e2e-tests-downward-api-b48mg deletion completed in 6.294815856s

• [SLOW TEST:17.869 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
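
This spec injects the pod's name, namespace, and IP as environment variables rather than files. A minimal equivalent pod, with illustrative names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_NAME=$POD_NAME POD_NAMESPACE=$POD_NAMESPACE POD_IP=$POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo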
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:57:49.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 24 12:57:50.491: INFO: Waiting up to 5m0s for pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2" in namespace "e2e-tests-svcaccounts-fsjt6" to be "success or failure"
Dec 24 12:57:50.603: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 111.123616ms
Dec 24 12:57:52.642: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150777828s
Dec 24 12:57:54.679: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187589864s
Dec 24 12:57:56.705: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213207178s
Dec 24 12:57:58.736: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244090805s
Dec 24 12:58:01.075: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.583423869s
Dec 24 12:58:03.500: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.008258987s
Dec 24 12:58:05.510: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.018654853s
Dec 24 12:58:07.569: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.077584498s
STEP: Saw pod success
Dec 24 12:58:07.569: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2" satisfied condition "success or failure"
Dec 24 12:58:07.581: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2 container token-test: 
STEP: delete the pod
Dec 24 12:58:08.110: INFO: Waiting for pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2 to disappear
Dec 24 12:58:08.149: INFO: Pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-5hpl2 no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 24 12:58:08.283: INFO: Waiting up to 5m0s for pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8" in namespace "e2e-tests-svcaccounts-fsjt6" to be "success or failure"
Dec 24 12:58:08.354: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 70.547355ms
Dec 24 12:58:10.515: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232286423s
Dec 24 12:58:12.544: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261323312s
Dec 24 12:58:14.609: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.325789862s
Dec 24 12:58:16.630: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347322678s
Dec 24 12:58:18.686: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.403017999s
Dec 24 12:58:20.699: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.415556886s
Dec 24 12:58:22.718: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.434905983s
Dec 24 12:58:25.213: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.930441183s
STEP: Saw pod success
Dec 24 12:58:25.214: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8" satisfied condition "success or failure"
Dec 24 12:58:25.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8 container root-ca-test: 
STEP: delete the pod
Dec 24 12:58:25.417: INFO: Waiting for pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8 to disappear
Dec 24 12:58:25.439: INFO: Pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-2b9c8 no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 24 12:58:25.498: INFO: Waiting up to 5m0s for pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895" in namespace "e2e-tests-svcaccounts-fsjt6" to be "success or failure"
Dec 24 12:58:25.576: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 77.969054ms
Dec 24 12:58:27.593: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094291902s
Dec 24 12:58:29.645: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146169728s
Dec 24 12:58:31.850: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351464462s
Dec 24 12:58:33.862: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363086778s
Dec 24 12:58:35.977: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 10.47857574s
Dec 24 12:58:38.000: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 12.501903032s
Dec 24 12:58:40.016: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Pending", Reason="", readiness=false. Elapsed: 14.517111879s
Dec 24 12:58:42.044: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.545586701s
STEP: Saw pod success
Dec 24 12:58:42.044: INFO: Pod "pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895" satisfied condition "success or failure"
Dec 24 12:58:42.052: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895 container namespace-test: 
STEP: delete the pod
Dec 24 12:58:43.552: INFO: Waiting for pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895 to disappear
Dec 24 12:58:43.808: INFO: Pod pod-service-account-fdb9fc43-264c-11ea-b7c4-0242ac110005-nf895 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:58:43.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-fsjt6" for this suite.
Dec 24 12:58:52.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:58:52.064: INFO: namespace: e2e-tests-svcaccounts-fsjt6, resource: bindings, ignored listing per whitelist
Dec 24 12:58:52.149: INFO: namespace e2e-tests-svcaccounts-fsjt6 deletion completed in 8.281509363s

• [SLOW TEST:62.521 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
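
The service-account spec verifies that the token, the cluster CA bundle, and the namespace are projected into the pod at the conventional path. With any running pod (the name below is a placeholder) the same files can be read directly:
# Files mounted from the pod's service account
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token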
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:58:52.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 12:58:52.647: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 24 12:58:52.685: INFO: Number of nodes with available pods: 0
Dec 24 12:58:52.685: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 24 12:58:52.848: INFO: Number of nodes with available pods: 0
Dec 24 12:58:52.848: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:58:54.237: INFO: Number of nodes with available pods: 0
Dec 24 12:58:54.237: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:58:54.960: INFO: Number of nodes with available pods: 0
Dec 24 12:58:54.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:58:55.882: INFO: Number of nodes with available pods: 0
Dec 24 12:58:55.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:58:56.869: INFO: Number of nodes with available pods: 0
Dec 24 12:58:56.869: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:58:57.874: INFO: Number of nodes with available pods: 0
Dec 24 12:58:57.874: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:58:59.357: INFO: Number of nodes with available pods: 0
Dec 24 12:58:59.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:58:59.880: INFO: Number of nodes with available pods: 0
Dec 24 12:58:59.880: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:00.866: INFO: Number of nodes with available pods: 0
Dec 24 12:59:00.866: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:01.887: INFO: Number of nodes with available pods: 0
Dec 24 12:59:01.887: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:02.874: INFO: Number of nodes with available pods: 1
Dec 24 12:59:02.874: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 24 12:59:02.957: INFO: Number of nodes with available pods: 1
Dec 24 12:59:02.957: INFO: Number of running nodes: 0, number of available pods: 1
Dec 24 12:59:03.985: INFO: Number of nodes with available pods: 0
Dec 24 12:59:03.985: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 24 12:59:04.100: INFO: Number of nodes with available pods: 0
Dec 24 12:59:04.100: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:05.444: INFO: Number of nodes with available pods: 0
Dec 24 12:59:05.444: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:06.126: INFO: Number of nodes with available pods: 0
Dec 24 12:59:06.127: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:08.072: INFO: Number of nodes with available pods: 0
Dec 24 12:59:08.072: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:08.448: INFO: Number of nodes with available pods: 0
Dec 24 12:59:08.448: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:09.113: INFO: Number of nodes with available pods: 0
Dec 24 12:59:09.114: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:10.173: INFO: Number of nodes with available pods: 0
Dec 24 12:59:10.173: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:11.121: INFO: Number of nodes with available pods: 0
Dec 24 12:59:11.121: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:12.262: INFO: Number of nodes with available pods: 0
Dec 24 12:59:12.262: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:13.120: INFO: Number of nodes with available pods: 0
Dec 24 12:59:13.120: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:14.126: INFO: Number of nodes with available pods: 0
Dec 24 12:59:14.126: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:15.146: INFO: Number of nodes with available pods: 0
Dec 24 12:59:15.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:16.779: INFO: Number of nodes with available pods: 0
Dec 24 12:59:16.779: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:17.127: INFO: Number of nodes with available pods: 0
Dec 24 12:59:17.127: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:18.129: INFO: Number of nodes with available pods: 0
Dec 24 12:59:18.129: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:19.149: INFO: Number of nodes with available pods: 0
Dec 24 12:59:19.150: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:20.121: INFO: Number of nodes with available pods: 0
Dec 24 12:59:20.122: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 12:59:21.117: INFO: Number of nodes with available pods: 1
Dec 24 12:59:21.117: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6xfn9, will wait for the garbage collector to delete the pods
Dec 24 12:59:21.244: INFO: Deleting DaemonSet.extensions daemon-set took: 56.47904ms
Dec 24 12:59:21.344: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.636182ms
Dec 24 12:59:32.897: INFO: Number of nodes with available pods: 0
Dec 24 12:59:32.897: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 12:59:32.903: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6xfn9/daemonsets","resourceVersion":"15906955"},"items":null}

Dec 24 12:59:32.907: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6xfn9/pods","resourceVersion":"15906955"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:59:32.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6xfn9" for this suite.
Dec 24 12:59:39.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:59:39.255: INFO: namespace: e2e-tests-daemonsets-6xfn9, resource: bindings, ignored listing per whitelist
Dec 24 12:59:39.429: INFO: namespace e2e-tests-daemonsets-6xfn9 deletion completed in 6.456888216s

• [SLOW TEST:47.280 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
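
The complex-daemon spec drives scheduling purely through node labels: the DaemonSet selects color=blue nodes, the node is relabelled blue and then green, and the update strategy is switched to RollingUpdate. A condensed sketch; DaemonSet and image are illustrative, the node name echoes this run:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon
spec:
  selector:
    matchLabels:
      app: demo-daemon
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: demo-daemon
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
EOF
# Labelling the node blue schedules a daemon pod; relabelling it green evicts the pod again
kubectl label node hunter-server-hu5at5svl7ps color=blue --overwrite
kubectl get pods -l app=demo-daemon -o wide
kubectl label node hunter-server-hu5at5svl7ps color=green --overwrite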
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 12:59:39.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 24 12:59:50.468: INFO: Successfully updated pod "labelsupdate3ee214c4-264d-11ea-b7c4-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 12:59:52.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-btmmh" for this suite.
Dec 24 13:00:16.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:00:16.740: INFO: namespace: e2e-tests-projected-btmmh, resource: bindings, ignored listing per whitelist
Dec 24 13:00:16.939: INFO: namespace e2e-tests-projected-btmmh deletion completed in 24.295461818s

• [SLOW TEST:37.511 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
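
Here the projected downward API volume exposes the pod's labels as a file; the test then relabels the pod and waits for the kubelet to refresh the mounted content. A sketch of the same round trip, with illustrative names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Change a label and watch the mounted file catch up (the kubelet refreshes it periodically)
kubectl label pod labels-update-demo stage=after --overwrite
kubectl logs -f labels-update-demo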
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:00:16.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-gv6x
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 13:00:17.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gv6x" in namespace "e2e-tests-subpath-mg5kp" to be "success or failure"
Dec 24 13:00:17.378: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 121.051428ms
Dec 24 13:00:19.725: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.46868264s
Dec 24 13:00:21.747: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490508211s
Dec 24 13:00:23.765: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.508628648s
Dec 24 13:00:25.886: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.629267405s
Dec 24 13:00:28.390: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 11.133082435s
Dec 24 13:00:30.894: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 13.63773024s
Dec 24 13:00:32.909: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.652342404s
Dec 24 13:00:35.009: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 17.75255363s
Dec 24 13:00:37.020: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 19.763434717s
Dec 24 13:00:39.049: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Pending", Reason="", readiness=false. Elapsed: 21.792437386s
Dec 24 13:00:41.062: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 23.805778004s
Dec 24 13:00:43.089: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 25.832500011s
Dec 24 13:00:45.196: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 27.939662633s
Dec 24 13:00:47.215: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 29.95852142s
Dec 24 13:00:49.273: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 32.016630245s
Dec 24 13:00:51.292: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 34.035032232s
Dec 24 13:00:53.318: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 36.061461589s
Dec 24 13:00:55.384: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 38.126925425s
Dec 24 13:00:57.406: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Running", Reason="", readiness=false. Elapsed: 40.148809979s
Dec 24 13:00:59.427: INFO: Pod "pod-subpath-test-secret-gv6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.170296795s
STEP: Saw pod success
Dec 24 13:00:59.427: INFO: Pod "pod-subpath-test-secret-gv6x" satisfied condition "success or failure"
Dec 24 13:00:59.435: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-gv6x container test-container-subpath-secret-gv6x: 
STEP: delete the pod
Dec 24 13:01:02.683: INFO: Waiting for pod pod-subpath-test-secret-gv6x to disappear
Dec 24 13:01:03.113: INFO: Pod pod-subpath-test-secret-gv6x no longer exists
STEP: Deleting pod pod-subpath-test-secret-gv6x
Dec 24 13:01:03.113: INFO: Deleting pod "pod-subpath-test-secret-gv6x" in namespace "e2e-tests-subpath-mg5kp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:01:03.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mg5kp" for this suite.
Dec 24 13:01:11.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:01:11.700: INFO: namespace: e2e-tests-subpath-mg5kp, resource: bindings, ignored listing per whitelist
Dec 24 13:01:11.732: INFO: namespace e2e-tests-subpath-mg5kp deletion completed in 8.446615161s

• [SLOW TEST:54.792 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
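
The atomic-writer subpath spec mounts a single key of a secret at a subPath and keeps reading it while the volume contents are rewritten. A reduced sketch with hypothetical names:
kubectl create secret generic subpath-demo-secret --from-literal=data-1=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /mnt/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/data-1
      subPath: data-1
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-demo-secret
EOF
kubectl logs subpath-secret-demo   # expected to print "hello"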
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:01:11.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-75facac3-264d-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 13:01:12.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-5bbvc" to be "success or failure"
Dec 24 13:01:12.356: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.299663ms
Dec 24 13:01:14.802: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459847348s
Dec 24 13:01:17.055: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713357856s
Dec 24 13:01:19.076: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.734722832s
Dec 24 13:01:21.550: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.207973643s
Dec 24 13:01:23.569: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.227151526s
Dec 24 13:01:25.581: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.239578381s
STEP: Saw pod success
Dec 24 13:01:25.581: INFO: Pod "pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:01:25.587: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 24 13:01:25.672: INFO: Waiting for pod pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:01:25.689: INFO: Pod pod-configmaps-75fe4927-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:01:25.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5bbvc" for this suite.
Dec 24 13:01:33.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:01:33.990: INFO: namespace: e2e-tests-configmap-5bbvc, resource: bindings, ignored listing per whitelist
Dec 24 13:01:34.056: INFO: namespace e2e-tests-configmap-5bbvc deletion completed in 8.276550112s

• [SLOW TEST:22.322 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
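
What the ConfigMap test above verifies is that a single configMap can back two separate volumes, each with its own mount path, inside one pod. A hedged sketch of an equivalent manifest; the configMap name, key, image, and paths are made up for illustration:

kubectl create configmap multi-vol-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # the same key should be readable through both mounts
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: multi-vol-demo
  - name: cm-two
    configMap:
      name: multi-vol-demo
EOF
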
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:01:34.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-832d12a7-264d-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 13:01:34.588: INFO: Waiting up to 5m0s for pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-secrets-9mvl5" to be "success or failure"
Dec 24 13:01:34.615: INFO: Pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.387349ms
Dec 24 13:01:36.640: INFO: Pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05197692s
Dec 24 13:01:38.666: INFO: Pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0782986s
Dec 24 13:01:40.919: INFO: Pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331288531s
Dec 24 13:01:42.935: INFO: Pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347218569s
Dec 24 13:01:45.030: INFO: Pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.442002755s
STEP: Saw pod success
Dec 24 13:01:45.030: INFO: Pod "pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:01:45.068: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 24 13:01:45.369: INFO: Waiting for pod pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:01:45.387: INFO: Pod pod-secrets-834ba234-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:01:45.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9mvl5" for this suite.
Dec 24 13:01:51.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:01:51.626: INFO: namespace: e2e-tests-secrets-9mvl5, resource: bindings, ignored listing per whitelist
Dec 24 13:01:51.656: INFO: namespace e2e-tests-secrets-9mvl5 deletion completed in 6.244290413s

• [SLOW TEST:17.600 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
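
The Secrets test is the same shape with one secret behind both volume mounts. Sketch under the same caveat that every name here is an assumption:

kubectl create secret generic multi-vol-secret --from-literal=data-1=secret-value

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-one/data-1 /etc/secret-two/data-1"]
    volumeMounts:
    - name: secret-one
      mountPath: /etc/secret-one
      readOnly: true
    - name: secret-two
      mountPath: /etc/secret-two
      readOnly: true
  volumes:
  - name: secret-one
    secret:
      secretName: multi-vol-secret
  - name: secret-two
    secret:
      secretName: multi-vol-secret
EOF
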
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:01:51.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:01:51.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-r78pq" to be "success or failure"
Dec 24 13:01:51.922: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.934865ms
Dec 24 13:01:54.805: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902750386s
Dec 24 13:01:56.826: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.924352636s
Dec 24 13:01:58.916: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.013910869s
Dec 24 13:02:00.933: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.031181808s
Dec 24 13:02:02.954: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.052481882s
Dec 24 13:02:05.005: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.103073637s
STEP: Saw pod success
Dec 24 13:02:05.005: INFO: Pod "downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:02:05.012: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 13:02:05.663: INFO: Waiting for pod downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:02:05.669: INFO: Pod downwardapi-volume-8da29793-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:02:05.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r78pq" for this suite.
Dec 24 13:02:13.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:02:14.283: INFO: namespace: e2e-tests-downward-api-r78pq, resource: bindings, ignored listing per whitelist
Dec 24 13:02:14.318: INFO: namespace e2e-tests-downward-api-r78pq deletion completed in 8.639910305s

• [SLOW TEST:22.661 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
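
This test leans on a documented downward-API fallback: when the container declares no cpu limit, a resourceFieldRef on limits.cpu surfaces the node's allocatable CPU instead. The memory test a little further down in this log works the same way with limits.memory. A minimal sketch, with an assumed file name and image, and deliberately no resources block on the container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-default-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # no resources.limits set, so the file reports node allocatable cpu
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
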
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:02:14.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 24 13:02:14.796: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:02:31.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-w84wj" for this suite.
Dec 24 13:02:38.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:02:38.210: INFO: namespace: e2e-tests-init-container-w84wj, resource: bindings, ignored listing per whitelist
Dec 24 13:02:38.291: INFO: namespace e2e-tests-init-container-w84wj deletion completed in 6.346267335s

• [SLOW TEST:23.973 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
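
The init-container test asserts that with restartPolicy Never, a failing init container sends the pod straight to Failed and the app container never starts. A self-contained sketch; names, images, and the demo command are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: failing-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo this should never run"]
EOF

# expected to settle on Failed, with the app container never started
kubectl get pod failing-init-demo -o jsonpath='{.status.phase}'
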
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:02:38.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 24 13:02:38.590: INFO: Waiting up to 5m0s for pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-28rh2" to be "success or failure"
Dec 24 13:02:38.633: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.205204ms
Dec 24 13:02:40.835: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244308228s
Dec 24 13:02:42.851: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260814374s
Dec 24 13:02:46.220: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.629135262s
Dec 24 13:02:48.234: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.643221846s
Dec 24 13:02:50.252: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.661411387s
Dec 24 13:02:52.273: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.682756805s
STEP: Saw pod success
Dec 24 13:02:52.273: INFO: Pod "pod-a973bd13-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:02:52.281: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a973bd13-264d-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 13:02:52.591: INFO: Waiting for pod pod-a973bd13-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:02:53.768: INFO: Pod pod-a973bd13-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:02:53.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-28rh2" for this suite.
Dec 24 13:03:02.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:03:02.782: INFO: namespace: e2e-tests-emptydir-28rh2, resource: bindings, ignored listing per whitelist
Dec 24 13:03:02.894: INFO: namespace e2e-tests-emptydir-28rh2 deletion completed in 9.108186889s

• [SLOW TEST:24.602 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
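
The (root,0666,default) tuple in the test name decodes as: run as root, create a file with mode 0666, on the default disk-backed emptyDir medium. A hedged equivalent using busybox rather than the harness's own test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-default-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
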
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:03:02.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:03:03.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-downward-api-mpp92" to be "success or failure"
Dec 24 13:03:03.517: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 176.817475ms
Dec 24 13:03:05.824: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484107161s
Dec 24 13:03:07.864: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.524105139s
Dec 24 13:03:10.111: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.770969138s
Dec 24 13:03:12.150: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810062197s
Dec 24 13:03:14.313: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.973167339s
Dec 24 13:03:16.324: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.983965743s
STEP: Saw pod success
Dec 24 13:03:16.324: INFO: Pod "downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:03:16.331: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 13:03:17.309: INFO: Waiting for pod downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:03:17.801: INFO: Pod downwardapi-volume-b833786d-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:03:17.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mpp92" for this suite.
Dec 24 13:03:24.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:03:24.277: INFO: namespace: e2e-tests-downward-api-mpp92, resource: bindings, ignored listing per whitelist
Dec 24 13:03:24.399: INFO: namespace e2e-tests-downward-api-mpp92 deletion completed in 6.349433281s

• [SLOW TEST:21.505 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:03:24.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c4fb1825-264d-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 13:03:24.752: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-nbj4f" to be "success or failure"
Dec 24 13:03:24.760: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172018ms
Dec 24 13:03:27.061: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308962285s
Dec 24 13:03:29.074: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322090593s
Dec 24 13:03:31.100: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.34757493s
Dec 24 13:03:33.628: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875648751s
Dec 24 13:03:35.953: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.200577539s
Dec 24 13:03:37.967: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.215388753s
STEP: Saw pod success
Dec 24 13:03:37.968: INFO: Pod "pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:03:37.972: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 24 13:03:38.409: INFO: Waiting for pod pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:03:38.894: INFO: Pod pod-configmaps-c4fc3dbb-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:03:38.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nbj4f" for this suite.
Dec 24 13:03:44.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:03:45.171: INFO: namespace: e2e-tests-configmap-nbj4f, resource: bindings, ignored listing per whitelist
Dec 24 13:03:45.283: INFO: namespace e2e-tests-configmap-nbj4f deletion completed in 6.372108762s

• [SLOW TEST:20.884 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
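
The non-root variant adds only a pod-level securityContext so the configMap volume is read by an unprivileged UID. Sketch; the UID and all names are assumptions:

kubectl create configmap nonroot-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: nonroot-demo
EOF
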
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:03:45.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:03:45.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-l7bpg" to be "success or failure"
Dec 24 13:03:45.798: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.961934ms
Dec 24 13:03:48.082: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332377275s
Dec 24 13:03:50.145: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.395551812s
Dec 24 13:03:52.940: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.190502113s
Dec 24 13:03:55.011: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.261796927s
Dec 24 13:03:57.029: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.279948922s
Dec 24 13:03:59.058: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.308666181s
Dec 24 13:04:02.498: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.749194619s
STEP: Saw pod success
Dec 24 13:04:02.499: INFO: Pod "downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:04:02.527: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 13:04:02.998: INFO: Waiting for pod downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:04:03.026: INFO: Pod downwardapi-volume-d1652db7-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:04:03.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l7bpg" for this suite.
Dec 24 13:04:09.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:04:09.252: INFO: namespace: e2e-tests-projected-l7bpg, resource: bindings, ignored listing per whitelist
Dec 24 13:04:09.268: INFO: namespace e2e-tests-projected-l7bpg deletion completed in 6.185315507s

• [SLOW TEST:23.984 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
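
The projected flavour wraps the same downwardAPI items inside a projected volume, and this time the container does declare a cpu limit, so that value (rather than node allocatable) is what lands in the file. A sketch with assumed values; the divisor of 1m makes the result come out in millicores:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
EOF
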
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:04:09.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-df9dd8d9-264d-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 13:04:09.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-glwbr" to be "success or failure"
Dec 24 13:04:09.520: INFO: Pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.637679ms
Dec 24 13:04:11.642: INFO: Pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130568477s
Dec 24 13:04:13.671: INFO: Pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159905199s
Dec 24 13:04:15.701: INFO: Pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189451083s
Dec 24 13:04:17.721: INFO: Pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209412866s
Dec 24 13:04:20.396: INFO: Pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.884660956s
STEP: Saw pod success
Dec 24 13:04:20.396: INFO: Pod "pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:04:20.798: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 24 13:04:21.021: INFO: Waiting for pod pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:04:21.029: INFO: Pod pod-configmaps-dfa073f6-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:04:21.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-glwbr" for this suite.
Dec 24 13:04:29.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:04:29.424: INFO: namespace: e2e-tests-configmap-glwbr, resource: bindings, ignored listing per whitelist
Dec 24 13:04:29.431: INFO: namespace e2e-tests-configmap-glwbr deletion completed in 8.385286677s

• [SLOW TEST:20.161 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
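
"With mappings" means the configMap volume uses items to remap a key onto an explicit path, again under a non-root securityContext. Sketch with made-up key and path names:

kubectl create configmap mapped-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox
    # data-1 is remapped to path/to/data-2 inside the mount
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: mapped-demo
      items:
      - key: data-1
        path: path/to/data-2
EOF
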
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:04:29.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 24 13:04:29.631: INFO: Waiting up to 5m0s for pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-4w5pk" to be "success or failure"
Dec 24 13:04:29.644: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.05838ms
Dec 24 13:04:31.696: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06510588s
Dec 24 13:04:34.102: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471137461s
Dec 24 13:04:36.119: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48865403s
Dec 24 13:04:38.385: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.754628829s
Dec 24 13:04:40.402: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.771287661s
Dec 24 13:04:42.417: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.786001145s
Dec 24 13:04:44.429: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.798474504s
Dec 24 13:04:47.008: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.376790645s
STEP: Saw pod success
Dec 24 13:04:47.008: INFO: Pod "pod-eba5d75d-264d-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:04:47.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-eba5d75d-264d-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 13:04:47.322: INFO: Waiting for pod pod-eba5d75d-264d-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:04:47.353: INFO: Pod pod-eba5d75d-264d-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:04:47.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4w5pk" for this suite.
Dec 24 13:04:53.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:04:53.572: INFO: namespace: e2e-tests-emptydir-4w5pk, resource: bindings, ignored listing per whitelist
Dec 24 13:04:53.659: INFO: namespace e2e-tests-emptydir-4w5pk deletion completed in 6.287214085s

• [SLOW TEST:24.228 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
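
Here the tuple reads: non-root user, file mode 0666, medium Memory (tmpfs). A sketch under the assumption that the default emptyDir permissions let the unprivileged UID write; fsGroup is added defensively:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && grep /test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
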
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:04:53.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 24 13:04:53.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 24 13:04:55.833: INFO: stderr: ""
Dec 24 13:04:55.833: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:04:55.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kg2vv" for this suite.
Dec 24 13:05:01.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:05:02.094: INFO: namespace: e2e-tests-kubectl-kg2vv, resource: bindings, ignored listing per whitelist
Dec 24 13:05:02.208: INFO: namespace e2e-tests-kubectl-kg2vv deletion completed in 6.347607779s

• [SLOW TEST:8.548 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
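
Stripped of the harness, the cluster-info check is just the CLI call recorded above:

kubectl --kubeconfig=/root/.kube/config cluster-info
# prints the master and KubeDNS endpoints; 'kubectl cluster-info dump'
# is the verbose variant the command's own output suggests for debugging
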
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:05:02.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 13:05:02.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-62gnr'
Dec 24 13:05:03.011: INFO: stderr: ""
Dec 24 13:05:03.011: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 24 13:05:03.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-62gnr'
Dec 24 13:05:11.750: INFO: stderr: ""
Dec 24 13:05:11.750: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:05:11.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-62gnr" for this suite.
Dec 24 13:05:19.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:05:20.341: INFO: namespace: e2e-tests-kubectl-62gnr, resource: bindings, ignored listing per whitelist
Dec 24 13:05:20.376: INFO: namespace e2e-tests-kubectl-62gnr deletion completed in 8.480086915s

• [SLOW TEST:18.167 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
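
The command under test appears verbatim in the log; as a standalone invocation, plus the cleanup the harness performs (--generator=run-pod/v1 is specific to kubectl of this vintage, v1.13):

kubectl run e2e-test-nginx-pod \
  --restart=Never \
  --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-62gnr

kubectl get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-62gnr
kubectl delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-62gnr
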
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:05:20.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:05:37.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-dzgf9" for this suite.
Dec 24 13:06:37.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:06:37.356: INFO: namespace: e2e-tests-kubelet-test-dzgf9, resource: bindings, ignored listing per whitelist
Dec 24 13:06:37.429: INFO: namespace e2e-tests-kubelet-test-dzgf9 deletion completed in 1m0.321411634s

• [SLOW TEST:77.053 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
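
The kubelet test runs a one-shot busybox command and checks that its stdout shows up in the container log. A hedged equivalent; the pod name and message are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busybox pod'"]
EOF

# once the pod completes, the echoed line should be retrievable here
kubectl logs busybox-logs-demo
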
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:06:37.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 13:06:37.768: INFO: Creating deployment "nginx-deployment"
Dec 24 13:06:37.780: INFO: Waiting for observed generation 1
Dec 24 13:06:40.559: INFO: Waiting for all required pods to come up
Dec 24 13:06:42.466: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 24 13:07:41.673: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 24 13:07:41.747: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 24 13:07:41.766: INFO: Updating deployment nginx-deployment
Dec 24 13:07:41.766: INFO: Waiting for observed generation 2
Dec 24 13:07:45.460: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 24 13:07:45.947: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 24 13:07:46.717: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 24 13:07:48.187: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 24 13:07:48.187: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 24 13:07:48.211: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 24 13:07:48.222: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 24 13:07:48.222: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 24 13:07:48.912: INFO: Updating deployment nginx-deployment
Dec 24 13:07:48.912: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 24 13:07:48.983: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 24 13:07:52.784: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 24 13:07:53.985: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-9pznd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9pznd/deployments/nginx-deployment,UID:380ac1df-264e-11ea-a994-fa163e34d433,ResourceVersion:15908074,Generation:3,CreationTimestamp:2019-12-24 13:06:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-24 13:07:45 +0000 UTC 2019-12-24 13:06:38 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-24 13:07:51 +0000 UTC 2019-12-24 13:07:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 24 13:07:54.439: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-9pznd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9pznd/replicasets/nginx-deployment-5c98f8fb5,UID:5e31560a-264e-11ea-a994-fa163e34d433,ResourceVersion:15908126,Generation:3,CreationTimestamp:2019-12-24 13:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 380ac1df-264e-11ea-a994-fa163e34d433 0xc001f75587 0xc001f75588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 13:07:54.439: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 24 13:07:54.440: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-9pznd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9pznd/replicasets/nginx-deployment-85ddf47c5d,UID:38310e80-264e-11ea-a994-fa163e34d433,ResourceVersion:15908116,Generation:3,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 380ac1df-264e-11ea-a994-fa163e34d433 0xc001f75647 0xc001f75648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 24 13:07:54.734: INFO: Pod "nginx-deployment-5c98f8fb5-4xlq8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4xlq8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-4xlq8,UID:64c62e50-264e-11ea-a994-fa163e34d433,ResourceVersion:15908095,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc001dbdb87 0xc001dbdb88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dbdc30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dbddf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.735: INFO: Pod "nginx-deployment-5c98f8fb5-5m9dz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5m9dz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-5m9dz,UID:5e449e0b-264e-11ea-a994-fa163e34d433,ResourceVersion:15908056,Generation:0,CreationTimestamp:2019-12-24 13:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc002278017 0xc002278018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002278080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022780a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-24 13:07:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.736: INFO: Pod "nginx-deployment-5c98f8fb5-ch4pl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ch4pl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-ch4pl,UID:65312820-264e-11ea-a994-fa163e34d433,ResourceVersion:15908122,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc002279667 0xc002279668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022796d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022797f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.736: INFO: Pod "nginx-deployment-5c98f8fb5-f9lcg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f9lcg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-f9lcg,UID:5e94995d-264e-11ea-a994-fa163e34d433,ResourceVersion:15908064,Generation:0,CreationTimestamp:2019-12-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc002279867 0xc002279868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022798d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022798f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-24 13:07:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.737: INFO: Pod "nginx-deployment-5c98f8fb5-kp75w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kp75w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-kp75w,UID:5e448e4f-264e-11ea-a994-fa163e34d433,ResourceVersion:15908046,Generation:0,CreationTimestamp:2019-12-24 13:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc002279cd7 0xc002279cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002279d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002279d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-24 13:07:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.737: INFO: Pod "nginx-deployment-5c98f8fb5-l6g6n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l6g6n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-l6g6n,UID:657ec670-264e-11ea-a994-fa163e34d433,ResourceVersion:15908125,Generation:0,CreationTimestamp:2019-12-24 13:07:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc002279f17 0xc002279f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002279f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b0050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.737: INFO: Pod "nginx-deployment-5c98f8fb5-lwdqn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lwdqn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-lwdqn,UID:5e873c90-264e-11ea-a994-fa163e34d433,ResourceVersion:15908062,Generation:0,CreationTimestamp:2019-12-24 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc0020b0107 0xc0020b0108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b06a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b07f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-24 13:07:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.738: INFO: Pod "nginx-deployment-5c98f8fb5-p8mh6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p8mh6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-p8mh6,UID:65317b4b-264e-11ea-a994-fa163e34d433,ResourceVersion:15908118,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc0020b0907 0xc0020b0908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b09f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b0a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.738: INFO: Pod "nginx-deployment-5c98f8fb5-rb5lf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rb5lf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-rb5lf,UID:6530ef5b-264e-11ea-a994-fa163e34d433,ResourceVersion:15908114,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc0020b0b97 0xc0020b0b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b0c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b0dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.738: INFO: Pod "nginx-deployment-5c98f8fb5-rf74h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rf74h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-rf74h,UID:6531472e-264e-11ea-a994-fa163e34d433,ResourceVersion:15908119,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc0020b0ef7 0xc0020b0ef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b1030} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b1050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.739: INFO: Pod "nginx-deployment-5c98f8fb5-s7w9z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s7w9z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-s7w9z,UID:5e36ee66-264e-11ea-a994-fa163e34d433,ResourceVersion:15908024,Generation:0,CreationTimestamp:2019-12-24 13:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc0020b12e7 0xc0020b12e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b1350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b1370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-24 13:07:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.739: INFO: Pod "nginx-deployment-5c98f8fb5-xq9ds" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xq9ds,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-xq9ds,UID:64b5a2d1-264e-11ea-a994-fa163e34d433,ResourceVersion:15908080,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc0020b1517 0xc0020b1518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b1580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b15a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.740: INFO: Pod "nginx-deployment-5c98f8fb5-z6xxr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z6xxr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-5c98f8fb5-z6xxr,UID:64c6fbc8-264e-11ea-a994-fa163e34d433,ResourceVersion:15908097,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5e31560a-264e-11ea-a994-fa163e34d433 0xc0020b16e7 0xc0020b16e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b1750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b1770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.740: INFO: Pod "nginx-deployment-85ddf47c5d-5nrjd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5nrjd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-5nrjd,UID:38496690-264e-11ea-a994-fa163e34d433,ResourceVersion:15907975,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020b1887 0xc0020b1888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b1a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b1a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-24 13:06:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://723a2ae132c64a379392f7b4a1a9763ac3edcc79fabe98ddeb09dc892366265e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.740: INFO: Pod "nginx-deployment-85ddf47c5d-5xsn7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5xsn7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-5xsn7,UID:653179ab-264e-11ea-a994-fa163e34d433,ResourceVersion:15908117,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020b1b97 0xc0020b1b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020b1d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020b1d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.741: INFO: Pod "nginx-deployment-85ddf47c5d-8nbwh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8nbwh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-8nbwh,UID:388d6ada-264e-11ea-a994-fa163e34d433,ResourceVersion:15907990,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc001e49237 0xc001e49238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e492e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e49420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-24 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d19e0f7c003fdb00066620583fe0cb726ab629dc768ec398354109de1b83d19a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.741: INFO: Pod "nginx-deployment-85ddf47c5d-8p5k2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8p5k2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-8p5k2,UID:64ca0fd9-264e-11ea-a994-fa163e34d433,ResourceVersion:15908105,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0018ce1a7 0xc0018ce1a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018ce210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018ce2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.742: INFO: Pod "nginx-deployment-85ddf47c5d-8rvkz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8rvkz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-8rvkz,UID:652f2036-264e-11ea-a994-fa163e34d433,ResourceVersion:15908110,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0018ce3c7 0xc0018ce3c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018cefc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018cefe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.742: INFO: Pod "nginx-deployment-85ddf47c5d-9qfx2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9qfx2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-9qfx2,UID:65310a9e-264e-11ea-a994-fa163e34d433,ResourceVersion:15908113,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0018cf0f7 0xc0018cf0f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018cf680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018cf6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.742: INFO: Pod "nginx-deployment-85ddf47c5d-bqwl5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqwl5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-bqwl5,UID:64b67566-264e-11ea-a994-fa163e34d433,ResourceVersion:15908082,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0018cf7d7 0xc0018cf7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018cf840} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018cf860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.743: INFO: Pod "nginx-deployment-85ddf47c5d-gcltg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gcltg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-gcltg,UID:38515dfe-264e-11ea-a994-fa163e34d433,ResourceVersion:15907979,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0018cf947 0xc0018cf948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018cfa70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018cfa90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-24 13:06:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://829dcb4f7b69d0bacfda1e02e24d4559c391b94aa0520c951c2867725cf22699}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.743: INFO: Pod "nginx-deployment-85ddf47c5d-gcqfb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gcqfb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-gcqfb,UID:64c9cdba-264e-11ea-a994-fa163e34d433,ResourceVersion:15908096,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0018cfc57 0xc0018cfc58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018cfcc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018cfd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.744: INFO: Pod "nginx-deployment-85ddf47c5d-gf5gk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gf5gk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-gf5gk,UID:384984ab-264e-11ea-a994-fa163e34d433,ResourceVersion:15907965,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0018cfe07 0xc0018cfe08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018cff30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018cff50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-24 13:06:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://91b461f975d5430301157f55c7bc21cf125de213ded075895f48080e0239f9f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.744: INFO: Pod "nginx-deployment-85ddf47c5d-gj9j5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gj9j5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-gj9j5,UID:64c96c4a-264e-11ea-a994-fa163e34d433,ResourceVersion:15908106,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a6367 0xc0020a6368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a63d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a63f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.745: INFO: Pod "nginx-deployment-85ddf47c5d-jjwxq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jjwxq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-jjwxq,UID:6531f4af-264e-11ea-a994-fa163e34d433,ResourceVersion:15908120,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a6587 0xc0020a6588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a6660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a6680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.745: INFO: Pod "nginx-deployment-85ddf47c5d-kd9h7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kd9h7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-kd9h7,UID:3847120c-264e-11ea-a994-fa163e34d433,ResourceVersion:15907972,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a6777 0xc0020a6778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a6a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a6a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-24 13:06:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c0ae501852ec31e0bd50f809b113f849ba685cc77fbf215f194c68e35ac60a00}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.746: INFO: Pod "nginx-deployment-85ddf47c5d-lj4xw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lj4xw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-lj4xw,UID:65313838-264e-11ea-a994-fa163e34d433,ResourceVersion:15908121,Generation:0,CreationTimestamp:2019-12-24 13:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a6b27 0xc0020a6b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a6b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a6bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.746: INFO: Pod "nginx-deployment-85ddf47c5d-nhv2r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nhv2r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-nhv2r,UID:64ca1b8c-264e-11ea-a994-fa163e34d433,ResourceVersion:15908103,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a6d27 0xc0020a6d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a6d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a6db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.747: INFO: Pod "nginx-deployment-85ddf47c5d-nvb6h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nvb6h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-nvb6h,UID:388db62d-264e-11ea-a994-fa163e34d433,ResourceVersion:15907982,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a6f57 0xc0020a6f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a6fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a6fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-24 13:06:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://62976291c6ae5da8df7f6a755d3dfca0d9d7fae006f44ca694fe013a5d9b7d94}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.747: INFO: Pod "nginx-deployment-85ddf47c5d-rrp9w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rrp9w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-rrp9w,UID:644d0ebc-264e-11ea-a994-fa163e34d433,ResourceVersion:15908123,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a7117 0xc0020a7118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a7180} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a71a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-24 13:07:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.747: INFO: Pod "nginx-deployment-85ddf47c5d-sgkgw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sgkgw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-sgkgw,UID:64b65427-264e-11ea-a994-fa163e34d433,ResourceVersion:15908088,Generation:0,CreationTimestamp:2019-12-24 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a7257 0xc0020a7258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a7320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a7340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.748: INFO: Pod "nginx-deployment-85ddf47c5d-wjxbm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wjxbm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-wjxbm,UID:38518c36-264e-11ea-a994-fa163e34d433,ResourceVersion:15907986,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a73b7 0xc0020a73b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a7420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a7440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-24 13:06:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d5a8645a5717797620173fef2726bb8fbc97d67666bb47ce1c9f2fc400541823}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 13:07:54.748: INFO: Pod "nginx-deployment-85ddf47c5d-zc4w5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zc4w5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9pznd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9pznd/pods/nginx-deployment-85ddf47c5d-zc4w5,UID:3851619a-264e-11ea-a994-fa163e34d433,ResourceVersion:15907994,Generation:0,CreationTimestamp:2019-12-24 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 38310e80-264e-11ea-a994-fa163e34d433 0xc0020a7867 0xc0020a7868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wl2cz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wl2cz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wl2cz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a78f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a7b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:07:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:06:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-24 13:06:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:07:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0b9b36b877b03141bd53e1cd73fa4b9b06ef425ae0c2dcd6c98e2640991b0766}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:07:54.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9pznd" for this suite.
Dec 24 13:09:04.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:09:05.119: INFO: namespace: e2e-tests-deployment-9pznd, resource: bindings, ignored listing per whitelist
Dec 24 13:09:05.788: INFO: namespace e2e-tests-deployment-9pznd deletion completed in 1m8.830376412s

• [SLOW TEST:148.357 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
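For reference, the proportional scaling test above operates on a Deployment whose pods use docker.io/library/nginx:1.14-alpine (as seen in the pod specs dumped in the log). The sketch below shows a comparable Deployment; the replica count and rollout parameters are illustrative assumptions, not the suite's exact spec.

# Minimal sketch of an nginx Deployment comparable to the one the test scales.
# Replica count and rollingUpdate values are assumptions for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 10              # scaling this up/down is what exercises proportional scaling of ReplicaSets
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3            # assumption: lets new pods be created while old ones still exist
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80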
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:09:05.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 24 13:09:07.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9cvrb'
Dec 24 13:09:07.544: INFO: stderr: ""
Dec 24 13:09:07.544: INFO: stdout: "pod/pause created\n"
Dec 24 13:09:07.544: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 24 13:09:07.544: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-9cvrb" to be "running and ready"
Dec 24 13:09:07.566: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.685743ms
Dec 24 13:09:10.537: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.992562467s
Dec 24 13:09:12.584: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.039742646s
Dec 24 13:09:14.911: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.366294557s
Dec 24 13:09:16.941: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.396197999s
Dec 24 13:09:18.966: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.42197895s
Dec 24 13:09:21.960: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.415111023s
Dec 24 13:09:23.976: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.431738277s
Dec 24 13:09:25.996: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.451219915s
Dec 24 13:09:28.484: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 20.939758007s
Dec 24 13:09:30.548: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.003856077s
Dec 24 13:09:32.662: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 25.117702056s
Dec 24 13:09:34.680: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 27.135564337s
Dec 24 13:09:34.680: INFO: Pod "pause" satisfied condition "running and ready"
Dec 24 13:09:34.680: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 24 13:09:34.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-9cvrb'
Dec 24 13:09:34.916: INFO: stderr: ""
Dec 24 13:09:34.916: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 24 13:09:34.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-9cvrb'
Dec 24 13:09:35.151: INFO: stderr: ""
Dec 24 13:09:35.151: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          28s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 24 13:09:35.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-9cvrb'
Dec 24 13:09:35.310: INFO: stderr: ""
Dec 24 13:09:35.310: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 24 13:09:35.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-9cvrb'
Dec 24 13:09:35.416: INFO: stderr: ""
Dec 24 13:09:35.416: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          28s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 24 13:09:35.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9cvrb'
Dec 24 13:09:35.555: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:09:35.555: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 24 13:09:35.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-9cvrb'
Dec 24 13:09:35.713: INFO: stderr: "No resources found.\n"
Dec 24 13:09:35.713: INFO: stdout: ""
Dec 24 13:09:35.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-9cvrb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 13:09:35.930: INFO: stderr: ""
Dec 24 13:09:35.930: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:09:35.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9cvrb" for this suite.
Dec 24 13:09:43.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:09:43.253: INFO: namespace: e2e-tests-kubectl-9cvrb, resource: bindings, ignored listing per whitelist
Dec 24 13:09:43.308: INFO: namespace e2e-tests-kubectl-9cvrb deletion completed in 7.362509687s

• [SLOW TEST:37.520 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
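The label test above creates a pause pod and then drives kubectl directly; the exact commands it ran are in the log. As a standalone sketch, the same add/verify/remove cycle can be reproduced against a pod like the one below (image and namespace are placeholders, not the suite's generated names).

# Sketch of a pause-style pod for exercising `kubectl label` (image/namespace are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # assumption: any long-running image works here
# Add the label:     kubectl label pods pause testing-label=testing-label-value -n <namespace>
# Show the column:   kubectl get pod pause -L testing-label -n <namespace>
# Remove the label:  kubectl label pods pause testing-label- -n <namespace>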
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:09:43.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 13:09:43.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:09:53.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-b2plw" for this suite.
Dec 24 13:10:35.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:10:35.883: INFO: namespace: e2e-tests-pods-b2plw, resource: bindings, ignored listing per whitelist
Dec 24 13:10:35.935: INFO: namespace e2e-tests-pods-b2plw deletion completed in 42.193882158s

• [SLOW TEST:52.627 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
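The websocket logs test above needs a container that actually writes to stdout so there is a log stream to fetch through the API server's pods/log subresource. A minimal sketch of such a pod follows; the image and command are assumptions, not the suite's exact spec.

# Sketch: a pod that writes to stdout so its logs can be streamed
# (e.g. via /api/v1/namespaces/<ns>/pods/<pod>/log). Image/command are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-logs-websocket
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo container is alive; sleep 600"]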
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:10:35.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:10:36.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-pw4v4" to be "success or failure"
Dec 24 13:10:36.144: INFO: Pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.70558ms
Dec 24 13:10:38.512: INFO: Pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38964943s
Dec 24 13:10:40.543: INFO: Pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420303159s
Dec 24 13:10:42.920: INFO: Pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.797302749s
Dec 24 13:10:44.935: INFO: Pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812206014s
Dec 24 13:10:47.045: INFO: Pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.922706594s
STEP: Saw pod success
Dec 24 13:10:47.045: INFO: Pod "downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:10:47.088: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 13:10:47.189: INFO: Waiting for pod downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:10:47.200: INFO: Pod downwardapi-volume-c619cba9-264e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:10:47.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pw4v4" for this suite.
Dec 24 13:10:53.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:10:53.363: INFO: namespace: e2e-tests-projected-pw4v4, resource: bindings, ignored listing per whitelist
Dec 24 13:10:53.530: INFO: namespace e2e-tests-projected-pw4v4 deletion completed in 6.323008799s

• [SLOW TEST:17.594 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
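The projected downwardAPI test above mounts a projected volume and asserts on the mode set for an individual item file. The sketch below shows that wiring; the mode value, paths and test image are illustrative assumptions.

# Sketch of a projected downwardAPI volume with an explicit per-item mode (values are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400            # the per-item mode the test asserts on (value here is illustrative)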
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:10:53.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 24 13:10:53.944: INFO: Waiting up to 5m0s for pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-ppjl9" to be "success or failure"
Dec 24 13:10:53.960: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.816677ms
Dec 24 13:10:56.340: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396500927s
Dec 24 13:10:58.365: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420615527s
Dec 24 13:11:00.379: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435513273s
Dec 24 13:11:02.721: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.776881077s
Dec 24 13:11:04.733: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.788754869s
Dec 24 13:11:06.764: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.819882685s
STEP: Saw pod success
Dec 24 13:11:06.764: INFO: Pod "pod-d0b478de-264e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:11:06.777: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d0b478de-264e-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 13:11:06.995: INFO: Waiting for pod pod-d0b478de-264e-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:11:07.007: INFO: Pod pod-d0b478de-264e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:11:07.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ppjl9" for this suite.
Dec 24 13:11:13.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:11:13.193: INFO: namespace: e2e-tests-emptydir-ppjl9, resource: bindings, ignored listing per whitelist
Dec 24 13:11:13.272: INFO: namespace e2e-tests-emptydir-ppjl9 deletion completed in 6.255000914s

• [SLOW TEST:19.742 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
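The emptyDir test above writes into a memory-backed (tmpfs) emptyDir as root and checks for mode 0666. A minimal sketch of the volume and mount involved; the image and command are assumptions, not the mounttest binary the suite actually runs.

# Sketch of a tmpfs-backed emptyDir checked for mode 0666 as root (image/command are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-root-0666-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    securityContext:
      runAsUser: 0                  # the "root" part of (root,0666,tmpfs)
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs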
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:11:13.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:11:21.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-n7xvv" for this suite.
Dec 24 13:11:27.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:11:27.506: INFO: namespace: e2e-tests-namespaces-n7xvv, resource: bindings, ignored listing per whitelist
Dec 24 13:11:27.527: INFO: namespace e2e-tests-namespaces-n7xvv deletion completed in 6.287977368s
STEP: Destroying namespace "e2e-tests-nsdeletetest-xnstv" for this suite.
Dec 24 13:11:27.529: INFO: Namespace e2e-tests-nsdeletetest-xnstv was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-fkq5x" for this suite.
Dec 24 13:11:33.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:11:33.831: INFO: namespace: e2e-tests-nsdeletetest-fkq5x, resource: bindings, ignored listing per whitelist
Dec 24 13:11:34.121: INFO: namespace e2e-tests-nsdeletetest-fkq5x deletion completed in 6.592141071s

• [SLOW TEST:20.848 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
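The namespaces test above creates a service inside a throwaway namespace, deletes the namespace, and verifies the service is gone once the namespace is recreated. The objects below are an illustrative reproduction of that flow, with the kubectl steps as comments; all names are placeholders.

# Sketch: a namespace plus a service inside it; deleting the namespace should garbage-collect the service.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80
# kubectl delete namespace nsdeletetest        # removes the namespace and everything in it
# kubectl create namespace nsdeletetest        # recreate it
# kubectl get services -n nsdeletetest         # expected: no resources found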
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:11:34.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e8d54260-264e-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 24 13:11:34.419: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-vzz89" to be "success or failure"
Dec 24 13:11:34.449: INFO: Pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.423347ms
Dec 24 13:11:36.476: INFO: Pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056335675s
Dec 24 13:11:38.499: INFO: Pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078990799s
Dec 24 13:11:42.175: INFO: Pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.755640592s
Dec 24 13:11:44.339: INFO: Pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.919247474s
Dec 24 13:11:46.393: INFO: Pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.973246117s
STEP: Saw pod success
Dec 24 13:11:46.393: INFO: Pod "pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:11:46.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 13:11:46.573: INFO: Waiting for pod pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:11:46.591: INFO: Pod pod-projected-secrets-e8d66e36-264e-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:11:46.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vzz89" for this suite.
Dec 24 13:11:54.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:11:54.854: INFO: namespace: e2e-tests-projected-vzz89, resource: bindings, ignored listing per whitelist
Dec 24 13:11:54.961: INFO: namespace e2e-tests-projected-vzz89 deletion completed in 8.359984294s

• [SLOW TEST:20.840 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
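The projected secret test above maps a secret key to a new path and sets an explicit per-item mode. A minimal sketch of that wiring follows; the secret name, key, path and mode are illustrative assumptions.

# Sketch of a projected secret volume with a key-to-path mapping and item mode (names/values are assumptions).
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1 && stat -c '%a' /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400            # the "Item Mode" the test checks (value here is illustrative)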
SSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:11:54.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 24 13:11:55.193: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-rxb49" to be "success or failure"
Dec 24 13:11:55.388: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 194.56982ms
Dec 24 13:11:57.787: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.593501301s
Dec 24 13:11:59.802: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.609059601s
Dec 24 13:12:01.939: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.745629785s
Dec 24 13:12:03.953: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.760119109s
Dec 24 13:12:05.968: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.775173375s
Dec 24 13:12:08.008: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.81499537s
Dec 24 13:12:10.022: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.829126998s
Dec 24 13:12:12.041: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.848043365s
STEP: Saw pod success
Dec 24 13:12:12.041: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 24 13:12:12.047: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 24 13:12:12.722: INFO: Waiting for pod pod-host-path-test to disappear
Dec 24 13:12:14.538: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:12:14.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-rxb49" for this suite.
Dec 24 13:12:22.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:12:22.946: INFO: namespace: e2e-tests-hostpath-rxb49, resource: bindings, ignored listing per whitelist
Dec 24 13:12:23.050: INFO: namespace e2e-tests-hostpath-rxb49 deletion completed in 8.467529405s

• [SLOW TEST:28.089 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
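The hostPath test above mounts a directory from the node and inspects the mode reported inside the container. A sketch of such a pod follows; the host path and image are assumptions (the suite uses its own mounttest containers).

# Sketch of a hostPath mount whose mode is inspected from inside the container (path/image are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["/bin/sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test
      type: DirectoryOrCreate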
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:12:23.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 24 13:12:23.238: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 24 13:12:23.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:12:23.736: INFO: stderr: ""
Dec 24 13:12:23.736: INFO: stdout: "service/redis-slave created\n"
Dec 24 13:12:23.738: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 24 13:12:23.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:12:24.443: INFO: stderr: ""
Dec 24 13:12:24.444: INFO: stdout: "service/redis-master created\n"
Dec 24 13:12:24.444: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 24 13:12:24.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:12:25.126: INFO: stderr: ""
Dec 24 13:12:25.126: INFO: stdout: "service/frontend created\n"
Dec 24 13:12:25.127: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 24 13:12:25.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:12:25.593: INFO: stderr: ""
Dec 24 13:12:25.593: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 24 13:12:25.594: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 24 13:12:25.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:12:26.018: INFO: stderr: ""
Dec 24 13:12:26.018: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 24 13:12:26.021: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 24 13:12:26.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:12:26.823: INFO: stderr: ""
Dec 24 13:12:26.823: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 24 13:12:26.823: INFO: Waiting for all frontend pods to be Running.
Dec 24 13:13:06.878: INFO: Waiting for frontend to serve content.
Dec 24 13:13:08.762: INFO: Trying to add a new entry to the guestbook.
Dec 24 13:13:08.801: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 24 13:13:08.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:13:09.183: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:13:09.183: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 13:13:09.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:13:09.562: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:13:09.562: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 13:13:09.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:13:10.042: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:13:10.042: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 13:13:10.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:13:10.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:13:10.157: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 13:13:10.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:13:10.370: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:13:10.371: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 13:13:10.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ppkkn'
Dec 24 13:13:10.741: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:13:10.741: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:13:10.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ppkkn" for this suite.
Dec 24 13:13:54.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:13:55.016: INFO: namespace: e2e-tests-kubectl-ppkkn, resource: bindings, ignored listing per whitelist
Dec 24 13:13:55.077: INFO: namespace e2e-tests-kubectl-ppkkn deletion completed in 44.314656202s

• [SLOW TEST:92.027 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
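
Note: the guestbook Deployments above are submitted with apiVersion extensions/v1beta1, which this v1.13 cluster still serves; that API group was deprecated and is no longer served as of Kubernetes 1.16, where apps/v1 is used instead and an explicit spec.selector becomes mandatory. A sketch of the frontend Deployment rewritten for apps/v1 under that assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                       # required by apps/v1; must match the template labels
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
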
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:13:55.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-3cf47469-264f-11ea-b7c4-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3cf47469-264f-11ea-b7c4-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:15:12.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7vszg" for this suite.
Dec 24 13:15:36.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:15:36.303: INFO: namespace: e2e-tests-configmap-7vszg, resource: bindings, ignored listing per whitelist
Dec 24 13:15:36.355: INFO: namespace e2e-tests-configmap-7vszg deletion completed in 24.25284859s

• [SLOW TEST:101.277 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
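
Note: the ConfigMap case above mounts a ConfigMap as a volume, updates the ConfigMap object, and then waits for the new value to appear inside the running pod. The wait of roughly a minute in this run is expected: mounted ConfigMaps are refreshed on the kubelet's periodic sync rather than immediately. A minimal sketch of that setup, with illustrative names, image, and data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1                 # updated to a new value mid-test
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                # illustrative
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd
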
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:15:36.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:15:36.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-qptvh" to be "success or failure"
Dec 24 13:15:36.702: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.963773ms
Dec 24 13:15:38.839: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150978073s
Dec 24 13:15:40.927: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239308782s
Dec 24 13:15:43.721: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.033747503s
Dec 24 13:15:45.743: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.055776678s
Dec 24 13:15:47.764: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.076672474s
Dec 24 13:15:49.784: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.096590376s
STEP: Saw pod success
Dec 24 13:15:49.784: INFO: Pod "downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:15:49.792: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 13:15:50.154: INFO: Waiting for pod downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:15:50.159: INFO: Pod downwardapi-volume-793cc026-264f-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:15:50.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qptvh" for this suite.
Dec 24 13:15:56.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:15:56.339: INFO: namespace: e2e-tests-projected-qptvh, resource: bindings, ignored listing per whitelist
Dec 24 13:15:56.414: INFO: namespace e2e-tests-projected-qptvh deletion completed in 6.247419832s

• [SLOW TEST:20.060 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
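
Note: the projected downward API case above exposes the container's CPU limit as a file; because the test container sets no CPU limit, the downward API falls back to the node's allocatable CPU, which is what the test asserts. A minimal sketch of a projected downward API volume exposing limits.cpu, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # no limit set, so node allocatable CPU is reported
              divisor: 1m
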
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:15:56.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 24 13:16:07.618: INFO: Successfully updated pod "pod-update-85457835-264f-11ea-b7c4-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 24 13:16:07.672: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:16:07.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vvmzv" for this suite.
Dec 24 13:16:31.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:16:31.774: INFO: namespace: e2e-tests-pods-vvmzv, resource: bindings, ignored listing per whitelist
Dec 24 13:16:31.895: INFO: namespace e2e-tests-pods-vvmzv deletion completed in 24.213973365s

• [SLOW TEST:35.481 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:16:31.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 24 13:16:46.834: INFO: Waiting up to 5m0s for pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005" in namespace "e2e-tests-pods-jthjz" to be "success or failure"
Dec 24 13:16:46.853: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.180503ms
Dec 24 13:16:49.384: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.549515847s
Dec 24 13:16:51.400: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565721853s
Dec 24 13:16:54.489: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.655282342s
Dec 24 13:16:56.510: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.676017389s
Dec 24 13:16:58.556: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.722143268s
Dec 24 13:17:00.888: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.053818608s
STEP: Saw pod success
Dec 24 13:17:00.888: INFO: Pod "client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:17:00.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 24 13:17:01.170: INFO: Waiting for pod client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:17:01.184: INFO: Pod client-envvars-a30adfe6-264f-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:17:01.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jthjz" for this suite.
Dec 24 13:17:45.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:17:45.327: INFO: namespace: e2e-tests-pods-jthjz, resource: bindings, ignored listing per whitelist
Dec 24 13:17:45.416: INFO: namespace e2e-tests-pods-jthjz deletion completed in 44.22396998s

• [SLOW TEST:73.521 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
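
Note: the "environment variables for services" case creates a server pod and a Service first (hence the gap before the client pod's wait begins at 13:16:46), then checks that the kubelet injected the Docker-link-style variables for that Service into the client container. For a Service named fooservice on port 8765 (hypothetical name, IP, and port, not taken from this run), the container would see variables of this form:

FOOSERVICE_SERVICE_HOST=10.0.0.15
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://10.0.0.15:8765

These variables are only populated for Services that already exist when the pod starts, which is why the ordering in the test matters.
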
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:17:45.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wprhh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 13:17:45.674: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 13:18:26.123: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wprhh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 13:18:26.123: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 13:18:26.548: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:18:26.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-wprhh" for this suite.
Dec 24 13:18:54.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:18:54.756: INFO: namespace: e2e-tests-pod-network-test-wprhh, resource: bindings, ignored listing per whitelist
Dec 24 13:18:54.819: INFO: namespace e2e-tests-pod-network-test-wprhh deletion completed in 28.233047035s

• [SLOW TEST:69.402 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:18:54.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w5gg7;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w5gg7;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w5gg7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 192.14.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.14.192_udp@PTR;check="$$(dig +tcp +noall +answer +search 192.14.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.14.192_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w5gg7;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w5gg7;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5gg7.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w5gg7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 192.14.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.14.192_udp@PTR;check="$$(dig +tcp +noall +answer +search 192.14.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.14.192_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 13:19:11.535: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.544: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.555: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-w5gg7 from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.563: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-w5gg7 from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.572: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.579: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.584: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.590: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.598: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.605: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.610: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.616: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.621: INFO: Unable to read 10.105.14.192_udp@PTR from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.626: INFO: Unable to read 10.105.14.192_tcp@PTR from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.632: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.638: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.643: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5gg7 from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.648: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5gg7 from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.653: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.659: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.663: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.673: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.678: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.685: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.692: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.697: INFO: Unable to read 10.105.14.192_udp@PTR from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.704: INFO: Unable to read 10.105.14.192_tcp@PTR from pod e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005: the server could not find the requested resource (get pods dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005)
Dec 24 13:19:11.704: INFO: Lookups using e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-w5gg7 wheezy_tcp@dns-test-service.e2e-tests-dns-w5gg7 wheezy_udp@dns-test-service.e2e-tests-dns-w5gg7.svc wheezy_tcp@dns-test-service.e2e-tests-dns-w5gg7.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.105.14.192_udp@PTR 10.105.14.192_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w5gg7 jessie_tcp@dns-test-service.e2e-tests-dns-w5gg7 jessie_udp@dns-test-service.e2e-tests-dns-w5gg7.svc jessie_tcp@dns-test-service.e2e-tests-dns-w5gg7.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5gg7.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5gg7.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.105.14.192_udp@PTR 10.105.14.192_tcp@PTR]

Dec 24 13:19:16.855: INFO: DNS probes using e2e-tests-dns-w5gg7/dns-test-efb8aba3-264f-11ea-b7c4-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:19:17.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-w5gg7" for this suite.
Dec 24 13:19:25.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:19:25.424: INFO: namespace: e2e-tests-dns-w5gg7, resource: bindings, ignored listing per whitelist
Dec 24 13:19:25.499: INFO: namespace e2e-tests-dns-w5gg7 deletion completed in 8.316234902s

• [SLOW TEST:30.678 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
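
Note: in the DNS case above, the wheezy and jessie probe pods loop over dig queries for the service's A and SRV records (plus the pod's own A record and the ClusterIP's PTR record), over both UDP and TCP, writing an OK marker file per name; the framework polls the probe pod for those files, so the initial "Unable to read ... the server could not find the requested resource" lines at 13:19:11 are expected and clear up once every name resolves (13:19:16). The SRV checks cover both a regular ClusterIP service and a headless one. A minimal sketch of a headless service of the kind being probed, with an illustrative selector:

apiVersion: v1
kind: Service
metadata:
  name: test-service-2
spec:
  clusterIP: None                 # headless: DNS returns the backing pod IPs directly
  selector:
    dns-test: "true"              # illustrative selector
  ports:
  - name: http
    port: 80
    protocol: TCP
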
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:19:25.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 24 13:19:25.758: INFO: Waiting up to 5m0s for pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-5bhz7" to be "success or failure"
Dec 24 13:19:25.790: INFO: Pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.810351ms
Dec 24 13:19:28.001: INFO: Pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242460563s
Dec 24 13:19:30.050: INFO: Pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291311785s
Dec 24 13:19:32.187: INFO: Pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428310395s
Dec 24 13:19:34.200: INFO: Pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441752782s
Dec 24 13:19:36.226: INFO: Pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.467647941s
STEP: Saw pod success
Dec 24 13:19:36.226: INFO: Pod "pod-01c5e735-2650-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:19:36.240: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-01c5e735-2650-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 13:19:36.308: INFO: Waiting for pod pod-01c5e735-2650-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:19:36.311: INFO: Pod pod-01c5e735-2650-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:19:36.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5bhz7" for this suite.
Dec 24 13:19:42.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:19:42.782: INFO: namespace: e2e-tests-emptydir-5bhz7, resource: bindings, ignored listing per whitelist
Dec 24 13:19:42.822: INFO: namespace e2e-tests-emptydir-5bhz7 deletion completed in 6.505818686s

• [SLOW TEST:17.322 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
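
Note: in the EmptyDir cases, the tuple in the test name encodes (user, file mode, medium): here the pod runs as a non-root UID, writes a file with mode 0666, and uses the node's default storage medium, while the later (non-root,0644,tmpfs) variant performs the same check with medium Memory, which backs the volume with tmpfs. A minimal sketch covering both media, with an illustrative image and command:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # non-root
  containers:
  - name: test-container
    image: busybox                # illustrative
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium; use  emptyDir: { medium: Memory }  for tmpfs
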
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:19:42.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 24 13:19:43.002: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:20:05.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-z87mj" for this suite.
Dec 24 13:20:29.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:20:29.666: INFO: namespace: e2e-tests-init-container-z87mj, resource: bindings, ignored listing per whitelist
Dec 24 13:20:29.755: INFO: namespace e2e-tests-init-container-z87mj deletion completed in 24.212175704s

• [SLOW TEST:46.932 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
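
Note: the "invoke init containers on a RestartAlways pod" case creates a pod whose init containers must each run to completion, in order, before the app container starts; with restartPolicy Always the pod then keeps running, so the test waits (about 22 seconds in this run) for the pod to become initialized and ready rather than for it to complete. A minimal sketch of such a pod, with illustrative names and images:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox                # illustrative
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: run-forever
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
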
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:20:29.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 24 13:20:30.074: INFO: Number of nodes with available pods: 0
Dec 24 13:20:30.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:31.099: INFO: Number of nodes with available pods: 0
Dec 24 13:20:31.099: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:32.305: INFO: Number of nodes with available pods: 0
Dec 24 13:20:32.305: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:33.093: INFO: Number of nodes with available pods: 0
Dec 24 13:20:33.093: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:34.094: INFO: Number of nodes with available pods: 0
Dec 24 13:20:34.094: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:35.235: INFO: Number of nodes with available pods: 0
Dec 24 13:20:35.235: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:36.088: INFO: Number of nodes with available pods: 0
Dec 24 13:20:36.088: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:37.092: INFO: Number of nodes with available pods: 0
Dec 24 13:20:37.092: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:38.094: INFO: Number of nodes with available pods: 0
Dec 24 13:20:38.094: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:39.107: INFO: Number of nodes with available pods: 0
Dec 24 13:20:39.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:40.113: INFO: Number of nodes with available pods: 1
Dec 24 13:20:40.113: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 24 13:20:40.393: INFO: Number of nodes with available pods: 0
Dec 24 13:20:40.393: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:41.431: INFO: Number of nodes with available pods: 0
Dec 24 13:20:41.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:42.426: INFO: Number of nodes with available pods: 0
Dec 24 13:20:42.426: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:43.575: INFO: Number of nodes with available pods: 0
Dec 24 13:20:43.575: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:44.502: INFO: Number of nodes with available pods: 0
Dec 24 13:20:44.502: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:45.408: INFO: Number of nodes with available pods: 0
Dec 24 13:20:45.408: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:46.421: INFO: Number of nodes with available pods: 0
Dec 24 13:20:46.421: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:47.448: INFO: Number of nodes with available pods: 0
Dec 24 13:20:47.448: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:48.421: INFO: Number of nodes with available pods: 0
Dec 24 13:20:48.421: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:49.419: INFO: Number of nodes with available pods: 0
Dec 24 13:20:49.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:50.417: INFO: Number of nodes with available pods: 0
Dec 24 13:20:50.417: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:51.442: INFO: Number of nodes with available pods: 0
Dec 24 13:20:51.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:52.411: INFO: Number of nodes with available pods: 0
Dec 24 13:20:52.411: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:53.461: INFO: Number of nodes with available pods: 0
Dec 24 13:20:53.461: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:54.876: INFO: Number of nodes with available pods: 0
Dec 24 13:20:54.876: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:55.492: INFO: Number of nodes with available pods: 0
Dec 24 13:20:55.492: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:56.441: INFO: Number of nodes with available pods: 0
Dec 24 13:20:56.441: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:57.429: INFO: Number of nodes with available pods: 0
Dec 24 13:20:57.429: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:58.438: INFO: Number of nodes with available pods: 0
Dec 24 13:20:58.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 24 13:20:59.416: INFO: Number of nodes with available pods: 1
Dec 24 13:20:59.416: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-86cxt, will wait for the garbage collector to delete the pods
Dec 24 13:20:59.498: INFO: Deleting DaemonSet.extensions daemon-set took: 18.808045ms
Dec 24 13:20:59.699: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.460694ms
Dec 24 13:21:07.024: INFO: Number of nodes with available pods: 0
Dec 24 13:21:07.024: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 13:21:07.033: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-86cxt/daemonsets","resourceVersion":"15909883"},"items":null}

Dec 24 13:21:07.047: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-86cxt/pods","resourceVersion":"15909883"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:21:07.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-86cxt" for this suite.
Dec 24 13:21:13.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:21:13.306: INFO: namespace: e2e-tests-daemonsets-86cxt, resource: bindings, ignored listing per whitelist
Dec 24 13:21:13.485: INFO: namespace e2e-tests-daemonsets-86cxt deletion completed in 6.402272669s

• [SLOW TEST:43.729 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
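
Note: the DaemonSet case creates a simple DaemonSet, polls until every schedulable node reports an available daemon pod (a single node in this cluster, hence "Number of running nodes: 1"), then deletes that pod and polls again until the controller revives it. The repeated "is running more than one daemon pod" lines appear to be the poll's progress message whenever the per-node pod count is not yet exactly one, so in this log they simply mean the count is still zero. A minimal sketch of a comparable DaemonSet, with illustrative labels and image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: busybox            # illustrative
        command: ["sh", "-c", "sleep 3600"]
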
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:21:13.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 24 13:21:13.809: INFO: Waiting up to 5m0s for pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005" in namespace "e2e-tests-emptydir-zb7vx" to be "success or failure"
Dec 24 13:21:13.905: INFO: Pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 95.787766ms
Dec 24 13:21:15.940: INFO: Pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130353807s
Dec 24 13:21:17.955: INFO: Pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14584575s
Dec 24 13:21:20.120: INFO: Pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310268738s
Dec 24 13:21:22.489: INFO: Pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.679502663s
Dec 24 13:21:24.577: INFO: Pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.767199153s
STEP: Saw pod success
Dec 24 13:21:24.577: INFO: Pod "pod-422e4ef3-2650-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:21:24.595: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-422e4ef3-2650-11ea-b7c4-0242ac110005 container test-container: 
STEP: delete the pod
Dec 24 13:21:25.001: INFO: Waiting for pod pod-422e4ef3-2650-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:21:25.019: INFO: Pod pod-422e4ef3-2650-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:21:25.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zb7vx" for this suite.
Dec 24 13:21:31.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:21:31.204: INFO: namespace: e2e-tests-emptydir-zb7vx, resource: bindings, ignored listing per whitelist
Dec 24 13:21:31.268: INFO: namespace e2e-tests-emptydir-zb7vx deletion completed in 6.243215812s

• [SLOW TEST:17.783 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:21:31.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:21:31.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005" in namespace "e2e-tests-projected-8ghrt" to be "success or failure"
Dec 24 13:21:31.671: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.953369ms
Dec 24 13:21:33.790: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205539642s
Dec 24 13:21:35.812: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227364447s
Dec 24 13:21:37.852: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267088427s
Dec 24 13:21:39.984: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.399634211s
Dec 24 13:21:42.080: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.495123872s
Dec 24 13:21:44.093: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.508537375s
Dec 24 13:21:46.105: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.519818004s
STEP: Saw pod success
Dec 24 13:21:46.105: INFO: Pod "downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:21:46.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005 container client-container: 
STEP: delete the pod
Dec 24 13:21:48.157: INFO: Waiting for pod downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:21:48.186: INFO: Pod downwardapi-volume-4cc67253-2650-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:21:48.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8ghrt" for this suite.
Dec 24 13:21:54.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:21:54.375: INFO: namespace: e2e-tests-projected-8ghrt, resource: bindings, ignored listing per whitelist
Dec 24 13:21:54.611: INFO: namespace e2e-tests-projected-8ghrt deletion completed in 6.399499632s

• [SLOW TEST:23.344 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
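The "should set DefaultMode on files" spec above builds a pod with a projected downward API volume and checks the file mode inside the container. A minimal sketch of such a pod spec, assuming an illustrative mode value, image, and mount path (none of these are taken from the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        defaultMode := int32(0400) // DefaultMode applied to the projected files; illustrative value
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &defaultMode,
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        // Expose the pod name as a file in the volume.
                                        Path:     "podname",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // illustrative
                    // Print the octal mode of the projected file.
                    Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        fmt.Println(pod.Name)
    }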
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 24 13:21:54.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-kbq7f/configmap-test-5a9545af-2650-11ea-b7c4-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 24 13:21:54.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005" in namespace "e2e-tests-configmap-kbq7f" to be "success or failure"
Dec 24 13:21:54.857: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.387284ms
Dec 24 13:21:57.028: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194884271s
Dec 24 13:21:59.039: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206130728s
Dec 24 13:22:01.090: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256993282s
Dec 24 13:22:03.305: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472274199s
Dec 24 13:22:05.313: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.480409813s
Dec 24 13:22:07.333: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.499741832s
STEP: Saw pod success
Dec 24 13:22:07.333: INFO: Pod "pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005" satisfied condition "success or failure"
Dec 24 13:22:07.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005 container env-test: 
STEP: delete the pod
Dec 24 13:22:07.601: INFO: Waiting for pod pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005 to disappear
Dec 24 13:22:07.655: INFO: Pod pod-configmaps-5aa1f73b-2650-11ea-b7c4-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 24 13:22:07.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kbq7f" for this suite.
Dec 24 13:22:13.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:22:13.836: INFO: namespace: e2e-tests-configmap-kbq7f, resource: bindings, ignored listing per whitelist
Dec 24 13:22:14.030: INFO: namespace e2e-tests-configmap-kbq7f deletion completed in 6.297408872s

• [SLOW TEST:19.418 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
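The ConfigMap spec above creates a ConfigMap and a pod whose container reads one of its keys through an environment variable. A minimal sketch of that wiring, with hypothetical ConfigMap data, key, and env var names chosen only for illustration:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
            Data:       map[string]string{"data-1": "value-1"}, // illustrative key/value
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "echo $CONFIG_DATA_1"},
                    Env: []corev1.EnvVar{{
                        Name: "CONFIG_DATA_1",
                        ValueFrom: &corev1.EnvVarSource{
                            // Pull the value from the ConfigMap key at pod start.
                            ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        fmt.Println(cm.Name, pod.Name)
    }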
SSSS
Dec 24 13:22:14.030: INFO: Running AfterSuite actions on all nodes
Dec 24 13:22:14.030: INFO: Running AfterSuite actions on node 1
Dec 24 13:22:14.030: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 9299.635 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (9300.06s)
FAIL
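The single failure reported above is the Namespaces [Serial] spec, which deletes a namespace and then verifies that all of its pods are eventually removed. A minimal sketch of that kind of check, assuming a hypothetical countPodsInNamespace helper in place of a real client-go list call:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // countPodsInNamespace is a hypothetical stand-in for listing pods in the
    // namespace via client-go; it is not part of the e2e framework.
    func countPodsInNamespace(ns string) (int, error) {
        return 0, nil // stubbed out for the sketch
    }

    func main() {
        ns := "e2e-tests-namespaces-example" // illustrative namespace name
        err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            n, err := countPodsInNamespace(ns)
            if err != nil {
                return false, err
            }
            // Done once no pods remain in the deleted namespace.
            return n == 0, nil
        })
        if err != nil {
            fmt.Println("pods were not removed in time:", err)
            return
        }
        fmt.Println("all pods removed")
    }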