I0211 10:47:24.187291 9 e2e.go:224] Starting e2e run "e28499b2-4cbb-11ea-a6e3-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581418043 - Will randomize all specs
Will run 201 of 2164 specs

Feb 11 10:47:24.499: INFO: >>> kubeConfig: /root/.kube/config
Feb 11 10:47:24.515: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 11 10:47:24.553: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 11 10:47:24.663: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 11 10:47:24.663: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 11 10:47:24.663: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 11 10:47:24.687: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 11 10:47:24.688: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 11 10:47:24.688: INFO: e2e test version: v1.13.12
Feb 11 10:47:24.691: INFO: kube-apiserver version: v1.13.8
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:47:24.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb 11 10:47:25.061: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb 11 10:47:25.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gp5k5'
Feb 11 10:47:27.444: INFO: stderr: ""
Feb 11 10:47:27.445: INFO: stdout: "pod/pause created\n"
Feb 11 10:47:27.445: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 11 10:47:27.446: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-gp5k5" to be "running and ready"
Feb 11 10:47:27.468: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.990219ms
Feb 11 10:47:29.885: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439708499s
Feb 11 10:47:31.905: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.458771934s
Feb 11 10:47:33.919: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473663258s
Feb 11 10:47:35.930: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483851219s
Feb 11 10:47:37.950: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.503741486s
Feb 11 10:47:37.950: INFO: Pod "pause" satisfied condition "running and ready"
Feb 11 10:47:37.950: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 11 10:47:37.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-gp5k5'
Feb 11 10:47:38.211: INFO: stderr: ""
Feb 11 10:47:38.211: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 11 10:47:38.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-gp5k5'
Feb 11 10:47:38.336: INFO: stderr: ""
Feb 11 10:47:38.336: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 11 10:47:38.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-gp5k5'
Feb 11 10:47:38.557: INFO: stderr: ""
Feb 11 10:47:38.557: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 11 10:47:38.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-gp5k5'
Feb 11 10:47:38.860: INFO: stderr: ""
Feb 11 10:47:38.860: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb 11 10:47:38.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gp5k5'
Feb 11 10:47:39.100: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 10:47:39.100: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 11 10:47:39.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-gp5k5'
Feb 11 10:47:39.244: INFO: stderr: "No resources found.\n"
Feb 11 10:47:39.244: INFO: stdout: ""
Feb 11 10:47:39.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-gp5k5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 10:47:39.361: INFO: stderr: ""
Feb 11 10:47:39.362: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:47:39.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gp5k5" for this suite.
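The label add/verify/remove flow above is plain kubectl; a minimal sketch of the same steps run by hand (the --kubeconfig and --namespace flags from the log are omitted, and a pod named pause is assumed to already exist):

    # add the label, show it as a column, then remove it with the trailing '-'
    kubectl label pods pause testing-label=testing-label-value
    kubectl get pod pause -L testing-label
    kubectl label pods pause testing-label-
    kubectl get pod pause -L testing-label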
Feb 11 10:47:45.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:47:45.579: INFO: namespace: e2e-tests-kubectl-gp5k5, resource: bindings, ignored listing per whitelist Feb 11 10:47:45.635: INFO: namespace e2e-tests-kubectl-gp5k5 deletion completed in 6.243881902s • [SLOW TEST:20.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:47:45.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Feb 11 10:47:56.073: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-f01f3153-4cbb-11ea-a6e3-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-j9cvl", SelfLink:"/api/v1/namespaces/e2e-tests-pods-j9cvl/pods/pod-submit-remove-f01f3153-4cbb-11ea-a6e3-0242ac110005", UID:"f0227998-4cbb-11ea-a994-fa163e34d433", ResourceVersion:"21296688", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717014865, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"939321843"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5zgwn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00121e1c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5zgwn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b64cb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c13380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b64cf0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b64d10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b64d18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b64d1c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014866, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014874, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014874, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717014865, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001b63ae0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001b63b00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://07845dc4f0b9f59f390e10f48d975e9c354ad96e905f5a9bc31be230654a3da8"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:48:02.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-j9cvl" for this suite. Feb 11 10:48:08.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:48:09.003: INFO: namespace: e2e-tests-pods-j9cvl, resource: bindings, ignored listing per whitelist Feb 11 10:48:09.097: INFO: namespace e2e-tests-pods-j9cvl deletion completed in 6.150869847s • [SLOW TEST:23.461 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:48:09.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 10:48:09.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-qh782' Feb 11 10:48:09.445: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 11 10:48:09.446: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 11 10:48:11.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-qh782'
Feb 11 10:48:12.332: INFO: stderr: ""
Feb 11 10:48:12.332: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:48:12.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qh782" for this suite.
Feb 11 10:48:18.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:48:18.585: INFO: namespace: e2e-tests-kubectl-qh782, resource: bindings, ignored listing per whitelist
Feb 11 10:48:18.681: INFO: namespace e2e-tests-kubectl-qh782 deletion completed in 6.33549609s
• [SLOW TEST:9.584 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
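As the deprecation warning in the stderr above says, the generator behind 'kubectl run' creating a Deployment is on its way out. A minimal sketch of the step the test ran and the suggested replacement (namespace flags omitted, names taken from the log):

    # what the test ran (deprecated generator on this 1.13-era kubectl)
    kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    # the non-deprecated way to get an equivalent Deployment
    kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    # cleanup, mirroring the AfterEach above
    kubectl delete deployment e2e-test-nginx-deployment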
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:48:18.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005
Feb 11 10:48:18.998: INFO: Pod name my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005: Found 0 pods out of 1
Feb 11 10:48:24.438: INFO: Pod name my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005: Found 1 pods out of 1
Feb 11 10:48:24.438: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005" are running
Feb 11 10:48:28.582: INFO: Pod "my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005-dpbmr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 10:48:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 10:48:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 10:48:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 10:48:19 +0000 UTC Reason: Message:}])
Feb 11 10:48:28.583: INFO: Trying to dial the pod
Feb 11 10:48:33.687: INFO: Controller my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005: Got expected result from replica 1 [my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005-dpbmr]: "my-hostname-basic-03d079e3-4cbc-11ea-a6e3-0242ac110005-dpbmr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:48:33.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-67b8r" for this suite.
Feb 11 10:48:39.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:48:39.819: INFO: namespace: e2e-tests-replication-controller-67b8r, resource: bindings, ignored listing per whitelist
Feb 11 10:48:39.901: INFO: namespace e2e-tests-replication-controller-67b8r deletion completed in 6.202064353s
• [SLOW TEST:21.219 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:48:39.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-10685e5d-4cbc-11ea-a6e3-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:48:54.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-48fwh" for this suite.
Feb 11 10:49:18.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:49:18.447: INFO: namespace: e2e-tests-configmap-48fwh, resource: bindings, ignored listing per whitelist
Feb 11 10:49:18.852: INFO: namespace e2e-tests-configmap-48fwh deletion completed in 24.568107869s
• [SLOW TEST:38.951 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
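The binary-data case above exercises the ConfigMap binaryData field (text keys go in data, non-UTF-8 keys in binaryData). A minimal sketch, assuming kubectl's --from-file binary detection behaves as expected (the file and ConfigMap names are illustrative, not taken from the test):

    # a non-UTF-8 file created with --from-file should land in .binaryData, base64-encoded
    head -c 16 /dev/urandom > payload.bin
    kubectl create configmap binary-demo --from-file=payload.bin
    kubectl get configmap binary-demo -o yaml    # look for a binaryData: section alongside data: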
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:49:18.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 11 10:49:19.318: INFO: Waiting up to 5m0s for pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-var-expansion-blntn" to be "success or failure"
Feb 11 10:49:19.325: INFO: Pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342674ms
Feb 11 10:49:21.634: INFO: Pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315361076s
Feb 11 10:49:23.649: INFO: Pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331119165s
Feb 11 10:49:25.680: INFO: Pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361244368s
Feb 11 10:49:28.330: INFO: Pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.011380685s
Feb 11 10:49:30.349: INFO: Pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.030594509s
STEP: Saw pod success
Feb 11 10:49:30.349: INFO: Pod "var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 10:49:30.356: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005 container dapi-container:
STEP: delete the pod
Feb 11 10:49:30.493: INFO: Waiting for pod var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005 to disappear
Feb 11 10:49:30.536: INFO: Pod var-expansion-27b6f889-4cbc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:49:30.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-blntn" for this suite.
Feb 11 10:49:36.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:49:36.768: INFO: namespace: e2e-tests-var-expansion-blntn, resource: bindings, ignored listing per whitelist
Feb 11 10:49:36.783: INFO: namespace e2e-tests-var-expansion-blntn deletion completed in 6.180882689s
• [SLOW TEST:17.929 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
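The substitution checked above is the kubelet's $(VAR_NAME) expansion of command/args against the container's own env. A minimal sketch of a pod that does the same thing (the pod name, image and MY_VAR variable are illustrative; the quoted heredoc keeps the host shell from expanding $(...)):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        # the kubelet replaces $(MY_VAR) before the container starts
        command: ["sh", "-c", "echo expanded value: $(MY_VAR)"]
        env:
        - name: MY_VAR
          value: from-the-pod-spec
    EOF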
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:49:36.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:49:44.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-bfxd5" for this suite.
Feb 11 10:49:50.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:49:50.980: INFO: namespace: e2e-tests-namespaces-bfxd5, resource: bindings, ignored listing per whitelist
Feb 11 10:49:51.057: INFO: namespace e2e-tests-namespaces-bfxd5 deletion completed in 6.167069276s
STEP: Destroying namespace "e2e-tests-nsdeletetest-ckxgp" for this suite.
Feb 11 10:49:51.062: INFO: Namespace e2e-tests-nsdeletetest-ckxgp was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-dc2qc" for this suite.
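What the test above asserts can be reproduced by hand: a Service only exists inside its namespace, so deleting the namespace removes the Service, and a recreated namespace of the same name starts out empty. A rough sketch (all names are illustrative):

    kubectl create namespace nsdelete-demo
    kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
    kubectl delete namespace nsdelete-demo    # waits until the namespace is fully removed
    kubectl create namespace nsdelete-demo
    kubectl get services -n nsdelete-demo     # expect no services in the recreated namespace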
Feb 11 10:49:57.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:49:57.315: INFO: namespace: e2e-tests-nsdeletetest-dc2qc, resource: bindings, ignored listing per whitelist Feb 11 10:49:57.339: INFO: namespace e2e-tests-nsdeletetest-dc2qc deletion completed in 6.276785162s • [SLOW TEST:20.555 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:49:57.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 11 10:49:57.528: INFO: Creating deployment "test-recreate-deployment" Feb 11 10:49:57.539: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 11 10:49:57.554: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Feb 11 10:50:00.653: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 11 10:50:00.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 10:50:02.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 10:50:05.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 10:50:06.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717014997, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 10:50:08.932: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 11 10:50:08.952: INFO: Updating deployment test-recreate-deployment Feb 11 10:50:08.952: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 11 10:50:10.719: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-86w64,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-86w64/deployments/test-recreate-deployment,UID:3e8eac62-4cbc-11ea-a994-fa163e34d433,ResourceVersion:21297051,Generation:2,CreationTimestamp:2020-02-11 10:49:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-11 10:50:10 +0000 UTC 2020-02-11 10:50:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-11 10:50:10 +0000 UTC 2020-02-11 10:49:57 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 11 10:50:10.871: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-86w64,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-86w64/replicasets/test-recreate-deployment-589c4bfd,UID:459f8220-4cbc-11ea-a994-fa163e34d433,ResourceVersion:21297050,Generation:1,CreationTimestamp:2020-02-11 10:50:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3e8eac62-4cbc-11ea-a994-fa163e34d433 0xc001ab8a3f 0xc001ab8a50}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 11 10:50:10.872: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 11 10:50:10.872: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-86w64,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-86w64/replicasets/test-recreate-deployment-5bf7f65dc,UID:3e91e905-4cbc-11ea-a994-fa163e34d433,ResourceVersion:21297042,Generation:2,CreationTimestamp:2020-02-11 10:49:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3e8eac62-4cbc-11ea-a994-fa163e34d433 0xc001ab8b10 0xc001ab8b11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 11 10:50:10.883: INFO: Pod "test-recreate-deployment-589c4bfd-zlkqg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-zlkqg,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-86w64,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86w64/pods/test-recreate-deployment-589c4bfd-zlkqg,UID:45a58fa2-4cbc-11ea-a994-fa163e34d433,ResourceVersion:21297054,Generation:0,CreationTimestamp:2020-02-11 10:50:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 459f8220-4cbc-11ea-a994-fa163e34d433 0xc001ab978f 0xc001ab97a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gtvc2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtvc2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gtvc2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab9800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab9820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 10:50:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 10:50:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 10:50:10 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 10:50:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-11 10:50:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:50:10.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-86w64" for this suite.
Feb 11 10:50:22.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:50:23.009: INFO: namespace: e2e-tests-deployment-86w64, resource: bindings, ignored listing per whitelist
Feb 11 10:50:23.097: INFO: namespace e2e-tests-deployment-86w64 deletion completed in 12.20298802s
• [SLOW TEST:25.758 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:50:23.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-4de8c723-4cbc-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 10:50:23.303: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-92lng" to be "success or failure"
Feb 11 10:50:23.314: INFO: Pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.108401ms
Feb 11 10:50:25.330: INFO: Pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026972491s
Feb 11 10:50:27.358: INFO: Pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055553948s
Feb 11 10:50:29.370: INFO: Pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067120247s
Feb 11 10:50:31.602: INFO: Pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299694272s
Feb 11 10:50:33.983: INFO: Pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.680661995s
STEP: Saw pod success
Feb 11 10:50:33.984: INFO: Pod "pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 10:50:33.996: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Feb 11 10:50:34.384: INFO: Waiting for pod pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005 to disappear
Feb 11 10:50:34.395: INFO: Pod pod-projected-configmaps-4de974fa-4cbc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:50:34.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-92lng" for this suite.
Feb 11 10:50:40.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:50:40.781: INFO: namespace: e2e-tests-projected-92lng, resource: bindings, ignored listing per whitelist
Feb 11 10:50:40.927: INFO: namespace e2e-tests-projected-92lng deletion completed in 6.511856597s
• [SLOW TEST:17.829 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
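The defaultMode behaviour exercised above comes from the projected volume's defaultMode field, which sets the file mode of the projected ConfigMap keys. A minimal sketch of an equivalent pod (the names, the 0400 mode and the busybox image are illustrative, not taken from the test):

    kubectl create configmap projected-demo --from-literal=key1=value1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        # list the projected file and print its contents; the mode should reflect defaultMode
        command: ["sh", "-c", "ls -lL /etc/projected && cat /etc/projected/key1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          defaultMode: 0400
          sources:
          - configMap:
              name: projected-demo
    EOF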
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:50:40.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 11 10:50:51.224: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 10:51:17.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-64fkq" for this suite.
Feb 11 10:51:23.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:51:24.150: INFO: namespace: e2e-tests-namespaces-64fkq, resource: bindings, ignored listing per whitelist
Feb 11 10:51:24.308: INFO: namespace e2e-tests-namespaces-64fkq deletion completed in 6.675345167s
STEP: Destroying namespace "e2e-tests-nsdeletetest-9lrmr" for this suite.
Feb 11 10:51:24.314: INFO: Namespace e2e-tests-nsdeletetest-9lrmr was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-khlbz" for this suite.
Feb 11 10:51:30.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 10:51:30.491: INFO: namespace: e2e-tests-nsdeletetest-khlbz, resource: bindings, ignored listing per whitelist
Feb 11 10:51:30.548: INFO: namespace e2e-tests-nsdeletetest-khlbz deletion completed in 6.234290874s
• [SLOW TEST:49.621 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 10:51:30.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dq8q8
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StaefulSet
Feb 11 10:51:30.885: INFO: Found 0 stateful pods, waiting for 3
Feb 11 10:51:40.935: INFO: Found 1 stateful pods, waiting for 3
Feb 11 10:51:50.924: INFO: Found 2 stateful pods, waiting for 3
Feb 11 10:52:00.927: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 10:52:00.927: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 10:52:00.927: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 11 10:52:00.968: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 11 10:52:11.037: INFO: Updating stateful set ss2
Feb 11 10:52:11.057: INFO: Waiting for Pod e2e-tests-statefulset-dq8q8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 11 10:52:21.710: INFO: Found 2 stateful pods, waiting for 3 Feb 11 10:52:32.110: INFO: Found 2 stateful pods, waiting for 3 Feb 11 10:52:43.484: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 10:52:43.484: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 10:52:43.485: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 11 10:52:51.761: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 10:52:51.761: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 10:52:51.761: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 11 10:52:51.883: INFO: Updating stateful set ss2 Feb 11 10:52:51.917: INFO: Waiting for Pod e2e-tests-statefulset-dq8q8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 10:53:01.950: INFO: Waiting for Pod e2e-tests-statefulset-dq8q8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 10:53:12.104: INFO: Updating stateful set ss2 Feb 11 10:53:12.385: INFO: Waiting for StatefulSet e2e-tests-statefulset-dq8q8/ss2 to complete update Feb 11 10:53:12.385: INFO: Waiting for Pod e2e-tests-statefulset-dq8q8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 10:53:22.427: INFO: Waiting for StatefulSet e2e-tests-statefulset-dq8q8/ss2 to complete update Feb 11 10:53:22.427: INFO: Waiting for Pod e2e-tests-statefulset-dq8q8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 10:53:32.418: INFO: Waiting for StatefulSet e2e-tests-statefulset-dq8q8/ss2 to complete update Feb 11 10:53:32.418: INFO: Waiting for Pod e2e-tests-statefulset-dq8q8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 11 10:53:42.455: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dq8q8 Feb 11 10:53:42.470: INFO: Scaling statefulset ss2 to 0 Feb 11 10:54:02.564: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 10:54:02.588: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:54:02.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-dq8q8" for this suite. 
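The canary and phased behaviour exercised above is driven entirely by the RollingUpdate strategy's partition field: the framework first raises it above the replica count (so no pod is updated), then sets it to 2 (only the highest-ordinal pod, ss2-2, moves to the new revision), and finally lowers it so the remaining ordinals roll in phases. The sketch below is not the test's own code; it is a minimal illustration of such a spec using the apps/v1 Go types, reusing names from the log (ss2, the headless service "test", nginx:1.15-alpine) and assuming everything else, such as the labels and container name.

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// canaryStatefulSet sketches a 3-replica StatefulSet whose RollingUpdate
// partition is 2, so only the highest-ordinal pod receives the new revision
// (the canary step seen in the log). Lowering Partition afterwards lets the
// controller walk the remaining ordinals down, which is the phased rollout.
func canaryStatefulSet() *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss2"} // assumed labels
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test", // headless service created in BeforeEach
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine", // updated image from the log
					}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: int32Ptr(2), // pods with ordinal >= 2 get the new revision
				},
			},
		},
	}
}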
Feb 11 10:54:10.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:54:10.950: INFO: namespace: e2e-tests-statefulset-dq8q8, resource: bindings, ignored listing per whitelist Feb 11 10:54:10.959: INFO: namespace e2e-tests-statefulset-dq8q8 deletion completed in 8.190918101s • [SLOW TEST:160.410 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:54:10.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 11 10:54:11.111: INFO: Waiting up to 5m0s for pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-85hsn" to be "success or failure" Feb 11 10:54:11.120: INFO: Pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.266051ms Feb 11 10:54:13.136: INFO: Pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024892623s Feb 11 10:54:15.172: INFO: Pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06110532s Feb 11 10:54:17.190: INFO: Pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079105282s Feb 11 10:54:19.209: INFO: Pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09825102s Feb 11 10:54:21.228: INFO: Pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11693425s STEP: Saw pod success Feb 11 10:54:21.228: INFO: Pod "pod-d5b29415-4cbc-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 10:54:21.233: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d5b29415-4cbc-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 10:54:21.508: INFO: Waiting for pod pod-d5b29415-4cbc-11ea-a6e3-0242ac110005 to disappear Feb 11 10:54:21.543: INFO: Pod pod-d5b29415-4cbc-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:54:21.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-85hsn" for this suite. 
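In the EmptyDir test just above, the (root,0644,default) triple in the test name describes the user the test container runs as, the file mode it expects to observe, and the emptyDir medium (default, i.e. node disk rather than tmpfs). A minimal sketch of the pod shape involved follows, using the core/v1 Go types; the container name test-container comes from the log, while the image, command and mount path are assumptions for illustration only.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod sketches a pod with a default-medium emptyDir mounted into a
// single container; the conformance test's real image and arguments differ,
// but the volume/volumeMount wiring is the part the test exercises.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Default medium means node-local disk; corev1.StorageMediumMemory would use tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // assumed image
				Command:      []string{"sh", "-c", "touch /test-volume/file && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}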
Feb 11 10:54:29.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:54:29.129: INFO: namespace: e2e-tests-emptydir-85hsn, resource: bindings, ignored listing per whitelist Feb 11 10:54:29.345: INFO: namespace e2e-tests-emptydir-85hsn deletion completed in 6.667686866s • [SLOW TEST:18.386 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:54:29.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-e0afb369-4cbc-11ea-a6e3-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-e0afb458-4cbc-11ea-a6e3-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e0afb369-4cbc-11ea-a6e3-0242ac110005 STEP: Updating configmap cm-test-opt-upd-e0afb458-4cbc-11ea-a6e3-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-e0afb47b-4cbc-11ea-a6e3-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:56:03.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-44dcl" for this suite. 
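The projected-configMap test above deletes one optional ConfigMap, updates a second and creates a third while the pod is running, then waits for the kubelet to resync the projected volume, which is why it spends well over a minute in "waiting to observe update in volume". Below is a hedged sketch of the volume shape involved: ConfigMaps projected into a single directory with Optional set, using illustrative names rather than the generated cm-test-opt-* names in the log.

package example

import (
	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

// projectedOptionalConfigMapVolume sketches a projected volume that merges
// several optional ConfigMaps into one directory; because they are Optional,
// a missing ConfigMap does not block the pod, and later changes show up in
// the mounted files once the kubelet resyncs the volume.
func projectedOptionalConfigMapVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmaps",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             boolPtr(true),
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             boolPtr(true),
					}},
				},
			},
		},
	}
}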
Feb 11 10:56:29.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:56:29.187: INFO: namespace: e2e-tests-projected-44dcl, resource: bindings, ignored listing per whitelist Feb 11 10:56:29.322: INFO: namespace e2e-tests-projected-44dcl deletion completed in 26.204143806s • [SLOW TEST:119.976 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:56:29.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Feb 11 10:56:29.548: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:56:29.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kkdbj" for this suite. 
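'kubectl proxy -p 0' asks the kernel for any free port, so a caller has to read the banner kubectl prints to learn which port was chosen before curling /api/ through it, which is what the framework's asynchronous run above does. The standalone sketch below shows that pattern with os/exec; it assumes kubectl is on PATH and prints its usual "Starting to serve on" banner, and error handling is kept minimal.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

// main launches `kubectl proxy -p 0` (port 0 lets the kernel pick a free
// port) and reads the first stdout line to discover which port was chosen.
func main() {
	cmd := exec.Command("kubectl", "proxy", "-p", "0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	scanner := bufio.NewScanner(stdout)
	if scanner.Scan() {
		// e.g. "Starting to serve on 127.0.0.1:43605"
		fmt.Println("proxy banner:", scanner.Text())
	}
	_ = cmd.Process.Kill() // stop the proxy once the port is known
}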
Feb 11 10:56:35.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:56:35.812: INFO: namespace: e2e-tests-kubectl-kkdbj, resource: bindings, ignored listing per whitelist Feb 11 10:56:35.949: INFO: namespace e2e-tests-kubectl-kkdbj deletion completed in 6.219547537s • [SLOW TEST:6.626 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:56:35.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 10:56:36.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-dnx8g" to be "success or failure" Feb 11 10:56:36.228: INFO: Pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.505363ms Feb 11 10:56:38.260: INFO: Pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046157034s Feb 11 10:56:40.280: INFO: Pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065797551s Feb 11 10:56:42.309: INFO: Pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094229733s Feb 11 10:56:44.324: INFO: Pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109752361s Feb 11 10:56:46.350: INFO: Pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.136081936s STEP: Saw pod success Feb 11 10:56:46.351: INFO: Pod "downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 10:56:46.362: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 10:56:46.574: INFO: Waiting for pod downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005 to disappear Feb 11 10:56:46.662: INFO: Pod downwardapi-volume-2c2e7356-4cbd-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:56:46.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dnx8g" for this suite. Feb 11 10:56:52.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:56:52.833: INFO: namespace: e2e-tests-downward-api-dnx8g, resource: bindings, ignored listing per whitelist Feb 11 10:56:52.884: INFO: namespace e2e-tests-downward-api-dnx8g deletion completed in 6.214735272s • [SLOW TEST:16.935 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:56:52.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-3631d364-4cbd-11ea-a6e3-0242ac110005 STEP: Creating secret with name s-test-opt-upd-3631d53e-4cbd-11ea-a6e3-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3631d364-4cbd-11ea-a6e3-0242ac110005 STEP: Updating secret s-test-opt-upd-3631d53e-4cbd-11ea-a6e3-0242ac110005 STEP: Creating secret with name s-test-opt-create-3631d55a-4cbd-11ea-a6e3-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:58:32.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lk6g6" for this suite. 
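The Downward API volume test above ("should set mode on item file") mounts pod metadata as files and asserts that a per-item mode overrides the volume-wide defaultMode. A minimal sketch of that volume follows, using the core/v1 Go types; the volume name, file path and the 0400 mode are illustrative assumptions, and only the mechanism (an Items entry with an explicit Mode) mirrors the test.

package example

import (
	corev1 "k8s.io/api/core/v1"
)

func int32Mode(m int32) *int32 { return &m }

// downwardAPIVolumeWithItemMode exposes the pod name as a file whose mode is
// set per item; a per-item Mode takes precedence over the volume defaultMode.
func downwardAPIVolumeWithItemMode() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					Mode:     int32Mode(0400), // per-item mode overrides the volume's defaultMode
				}},
			},
		},
	}
}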
Feb 11 10:58:58.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:58:58.710: INFO: namespace: e2e-tests-secrets-lk6g6, resource: bindings, ignored listing per whitelist Feb 11 10:58:58.710: INFO: namespace e2e-tests-secrets-lk6g6 deletion completed in 26.280804826s • [SLOW TEST:125.825 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:58:58.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0211 10:59:02.122157 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 11 10:59:02.122: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:59:02.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mc4vf" for this suite. 
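The garbage-collector test above deletes the Deployment without orphaning, so the ReplicaSet it created is removed through its ownerReference once the collector catches up (which is why the log briefly still sees 2 pods and 1 rs before they disappear). The sketch below shows one way to request that cascading behaviour from Go; the Delete signature shown matches client-go of the same era as this v1.13 run and is an assumption, since newer client-go additionally takes a context and a value DeleteOptions.

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeploymentCascading asks the API server for background propagation:
// instead of orphaning them, the garbage collector deletes the Deployment's
// ReplicaSets (and their pods) by following ownerReferences.
func deleteDeploymentCascading(cs kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().Deployments(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}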
Feb 11 10:59:08.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:59:08.447: INFO: namespace: e2e-tests-gc-mc4vf, resource: bindings, ignored listing per whitelist Feb 11 10:59:08.508: INFO: namespace e2e-tests-gc-mc4vf deletion completed in 6.350936181s • [SLOW TEST:9.798 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:59:08.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Feb 11 10:59:08.913: INFO: Waiting up to 5m0s for pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005" in namespace "e2e-tests-var-expansion-kmdz6" to be "success or failure" Feb 11 10:59:08.931: INFO: Pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.863426ms Feb 11 10:59:10.946: INFO: Pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032359878s Feb 11 10:59:12.978: INFO: Pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064629476s Feb 11 10:59:15.621: INFO: Pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708130731s Feb 11 10:59:17.709: INFO: Pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795552637s Feb 11 10:59:19.721: INFO: Pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.807666048s STEP: Saw pod success Feb 11 10:59:19.721: INFO: Pod "var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 10:59:19.727: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005 container dapi-container: STEP: delete the pod Feb 11 10:59:20.328: INFO: Waiting for pod var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005 to disappear Feb 11 10:59:20.385: INFO: Pod var-expansion-8721c4a7-4cbd-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 10:59:20.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-kmdz6" for this suite. 
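Variable expansion, as tested above, happens in the kubelet: $(NAME) references in a container's command and args are substituted from that container's env before the process starts. A minimal illustrative pod follows; the container name dapi-container is taken from the log, while the image, env var name and echoed value are assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod shows $(MESSAGE) in Args being expanded from the
// container's own env, which is the behaviour the var-expansion test asserts
// by checking the pod's output.
func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container", // container name taken from the log
				Image:   "busybox",        // assumed image
				Command: []string{"/bin/echo"},
				Args:    []string{"$(MESSAGE)"}, // expanded by the kubelet, not a shell
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "substituted before exec"}},
			}},
		},
	}
}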
Feb 11 10:59:26.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 10:59:26.750: INFO: namespace: e2e-tests-var-expansion-kmdz6, resource: bindings, ignored listing per whitelist Feb 11 10:59:26.934: INFO: namespace e2e-tests-var-expansion-kmdz6 deletion completed in 6.426853477s • [SLOW TEST:18.425 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 10:59:26.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 11 10:59:47.319: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 10:59:47.353: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 10:59:49.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 10:59:49.371: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 10:59:51.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 10:59:51.380: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 10:59:53.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 10:59:53.381: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 10:59:55.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 10:59:55.374: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 10:59:57.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 10:59:57.371: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 10:59:59.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 10:59:59.375: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 11:00:01.354: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 11:00:01.387: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 11:00:03.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 11:00:03.373: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 11:00:05.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 11:00:06.077: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 11:00:07.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 11:00:07.705: INFO: Pod pod-with-prestop-exec-hook still exists Feb 
11 11:00:09.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 11:00:09.365: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 11:00:11.354: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 11:00:11.377: INFO: Pod pod-with-prestop-exec-hook still exists Feb 11 11:00:13.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 11 11:00:13.382: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:00:13.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-868z9" for this suite. Feb 11 11:00:37.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:00:37.679: INFO: namespace: e2e-tests-container-lifecycle-hook-868z9, resource: bindings, ignored listing per whitelist Feb 11 11:00:37.727: INFO: namespace e2e-tests-container-lifecycle-hook-868z9 deletion completed in 24.236810207s • [SLOW TEST:70.793 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:00:37.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bc49f2a8-4cbd-11ea-a6e3-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bc49f2a8-4cbd-11ea-a6e3-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:00:50.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-stjsh" for this suite. 
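The lifecycle-hook test above deletes pod-with-prestop-exec-hook and then polls until it is gone: the long "still exists" stretch is the termination grace period during which the preStop exec handler runs and is later observed by the handler pod. A hedged sketch of a container with such a hook follows; the command and image are assumptions, and the corev1.Handler type used here is the one from the v1.13-era API in this log (newer releases call it LifecycleHandler).

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// preStopExecContainer attaches a preStop exec handler that the kubelet runs
// before stopping the container, delaying termination until it finishes or
// the grace period expires.
func preStopExecContainer() corev1.Container {
	return corev1.Container{
		Name:  "pod-with-prestop-exec-hook",
		Image: "busybox", // assumed image
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				Exec: &corev1.ExecAction{
					Command: []string{"sh", "-c", "echo prestop ran"},
				},
			},
		},
	}
}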
Feb 11 11:01:14.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:01:14.604: INFO: namespace: e2e-tests-projected-stjsh, resource: bindings, ignored listing per whitelist Feb 11 11:01:14.617: INFO: namespace e2e-tests-projected-stjsh deletion completed in 24.232687683s • [SLOW TEST:36.888 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:01:14.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 11 11:01:15.364: INFO: Waiting up to 5m0s for pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp" in namespace "e2e-tests-svcaccounts-6lqrg" to be "success or failure" Feb 11 11:01:15.383: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.331732ms Feb 11 11:01:17.462: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09777041s Feb 11 11:01:19.476: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111952524s Feb 11 11:01:21.758: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393816833s Feb 11 11:01:23.781: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416528203s Feb 11 11:01:25.807: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.442299983s Feb 11 11:01:28.184: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.819294428s Feb 11 11:01:30.297: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.933106104s Feb 11 11:01:32.312: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.947878269s STEP: Saw pod success Feb 11 11:01:32.312: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp" satisfied condition "success or failure" Feb 11 11:01:32.317: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp container token-test: STEP: delete the pod Feb 11 11:01:33.057: INFO: Waiting for pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp to disappear Feb 11 11:01:33.067: INFO: Pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-d9hwp no longer exists STEP: Creating a pod to test consume service account root CA Feb 11 11:01:33.146: INFO: Waiting up to 5m0s for pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh" in namespace "e2e-tests-svcaccounts-6lqrg" to be "success or failure" Feb 11 11:01:33.183: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 37.160565ms Feb 11 11:01:35.625: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47887537s Feb 11 11:01:37.646: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499595662s Feb 11 11:01:39.680: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533637606s Feb 11 11:01:41.928: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.781661946s Feb 11 11:01:43.951: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.805125621s Feb 11 11:01:45.965: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.818783799s Feb 11 11:01:48.043: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.896712874s Feb 11 11:01:50.074: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.927928688s STEP: Saw pod success Feb 11 11:01:50.074: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh" satisfied condition "success or failure" Feb 11 11:01:50.096: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh container root-ca-test: STEP: delete the pod Feb 11 11:01:51.252: INFO: Waiting for pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh to disappear Feb 11 11:01:51.409: INFO: Pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-7j2rh no longer exists STEP: Creating a pod to test consume service account namespace Feb 11 11:01:51.433: INFO: Waiting up to 5m0s for pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg" in namespace "e2e-tests-svcaccounts-6lqrg" to be "success or failure" Feb 11 11:01:51.455: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 22.651853ms Feb 11 11:01:53.473: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040218195s Feb 11 11:01:55.479: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046641107s Feb 11 11:01:58.099: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.666769726s Feb 11 11:02:00.202: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.769525392s Feb 11 11:02:02.232: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.799118618s Feb 11 11:02:04.265: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.832018387s Feb 11 11:02:06.288: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.854939753s Feb 11 11:02:08.303: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.870835862s Feb 11 11:02:10.589: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.156224648s STEP: Saw pod success Feb 11 11:02:10.589: INFO: Pod "pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg" satisfied condition "success or failure" Feb 11 11:02:10.602: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg container namespace-test: STEP: delete the pod Feb 11 11:02:10.833: INFO: Waiting for pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg to disappear Feb 11 11:02:10.890: INFO: Pod pod-service-account-d28f630e-4cbd-11ea-a6e3-0242ac110005-m49zg no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:02:10.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-6lqrg" for this suite. 
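Each of the three pods in the ServiceAccounts test above (token-test, root-ca-test, namespace-test) reads one file from the credentials volume the kubelet mounts automatically for the pod's service account. The sketch below is not the test image's code; it simply reads the three files from the conventional mount path inside a pod, using only the standard library.

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// saDir is the conventional mount point for a pod's service-account
// credentials: a bearer token, the cluster CA bundle, and the namespace.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			fmt.Println(name, "not mounted:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}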
Feb 11 11:02:18.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:02:19.064: INFO: namespace: e2e-tests-svcaccounts-6lqrg, resource: bindings, ignored listing per whitelist Feb 11 11:02:19.085: INFO: namespace e2e-tests-svcaccounts-6lqrg deletion completed in 8.181004142s • [SLOW TEST:64.467 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:02:19.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 11:02:19.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-j9dgk' Feb 11 11:02:21.831: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 11 11:02:21.832: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 11 11:02:21.872: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 11 11:02:21.908: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 11 11:02:22.064: INFO: scanned /root for discovery docs: Feb 11 11:02:22.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-j9dgk' Feb 11 11:02:50.466: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 11 11:02:50.466: INFO: stdout: "Created e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01\nScaling up e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Feb 11 11:02:50.466: INFO: stdout: "Created e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01\nScaling up e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 11 11:02:50.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j9dgk' Feb 11 11:02:50.681: INFO: stderr: "" Feb 11 11:02:50.682: INFO: stdout: "e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01-dr469 " Feb 11 11:02:50.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01-dr469 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j9dgk' Feb 11 11:02:50.780: INFO: stderr: "" Feb 11 11:02:50.780: INFO: stdout: "true" Feb 11 11:02:50.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01-dr469 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j9dgk' Feb 11 11:02:50.899: INFO: stderr: "" Feb 11 11:02:50.899: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 11 11:02:50.899: INFO: e2e-test-nginx-rc-d6758346902053fd6d47ba3b350d4a01-dr469 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Feb 11 11:02:50.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j9dgk' Feb 11 11:02:51.069: INFO: stderr: "" Feb 11 11:02:51.069: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:02:51.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j9dgk" for this suite. 
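kubectl rolling-update (already deprecated in this release, as the stderr above notes) works by cloning the ReplicationController under a hashed name, shifting replicas from old to new one pod at a time, deleting the old controller and renaming the clone back, which is exactly the sequence shown in the stdout above. For orientation, the sketch below approximates the single-replica RC that 'kubectl run --generator=run/v1 e2e-test-nginx-rc' starts from, using the core/v1 Go types; the exact generated fields may differ, and the label/selector choice is an assumption.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func one() *int32 { i := int32(1); return &i }

// nginxRC approximates the ReplicationController the test rolls over:
// one replica of nginx:1.14-alpine selected by run=e2e-test-nginx-rc.
func nginxRC() *corev1.ReplicationController {
	labels := map[string]string{"run": "e2e-test-nginx-rc"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: one(),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}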
Feb 11 11:03:15.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:03:15.278: INFO: namespace: e2e-tests-kubectl-j9dgk, resource: bindings, ignored listing per whitelist Feb 11 11:03:15.339: INFO: namespace e2e-tests-kubectl-j9dgk deletion completed in 24.228923077s • [SLOW TEST:56.254 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:03:15.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-1a2ecc6f-4cbe-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume secrets Feb 11 11:03:15.524: INFO: Waiting up to 5m0s for pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-lsbws" to be "success or failure" Feb 11 11:03:15.558: INFO: Pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.829785ms Feb 11 11:03:18.235: INFO: Pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.710375036s Feb 11 11:03:20.247: INFO: Pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722695267s Feb 11 11:03:22.261: INFO: Pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.736370084s Feb 11 11:03:24.434: INFO: Pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909791243s Feb 11 11:03:26.624: INFO: Pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.099448696s STEP: Saw pod success Feb 11 11:03:26.624: INFO: Pod "pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:03:26.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 11 11:03:26.802: INFO: Waiting for pod pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005 to disappear Feb 11 11:03:26.818: INFO: Pod pod-secrets-1a30ba49-4cbe-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:03:26.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lsbws" for this suite. Feb 11 11:03:33.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:03:33.146: INFO: namespace: e2e-tests-secrets-lsbws, resource: bindings, ignored listing per whitelist Feb 11 11:03:33.173: INFO: namespace e2e-tests-secrets-lsbws deletion completed in 6.347352817s • [SLOW TEST:17.834 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:03:33.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xqqx2 Feb 11 11:03:43.465: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xqqx2 STEP: checking the pod's current state and verifying that restartCount is present Feb 11 11:03:43.471: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:07:44.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xqqx2" for this suite. 
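The probe test above creates liveness-http and then simply watches the restart count for about four minutes: as long as the /healthz endpoint keeps answering successfully, the kubelet never kills the container. A hedged sketch of a container with such a probe follows; the port, image and timing values are assumptions, and the embedded corev1.Handler field is the v1.13-era name (newer API releases use ProbeHandler).

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessHTTPContainer probes /healthz over HTTP; while the probe succeeds
// the container is left alone, so its restartCount stays at 0.
func livenessHTTPContainer() corev1.Container {
	return corev1.Container{
		Name:  "liveness-http",
		Image: "k8s.gcr.io/liveness", // assumed image
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       3,
			FailureThreshold:    3,
		},
	}
}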
Feb 11 11:07:50.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:07:50.439: INFO: namespace: e2e-tests-container-probe-xqqx2, resource: bindings, ignored listing per whitelist Feb 11 11:07:50.656: INFO: namespace e2e-tests-container-probe-xqqx2 deletion completed in 6.32790146s • [SLOW TEST:257.482 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:07:50.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-7p2v STEP: Creating a pod to test atomic-volume-subpath Feb 11 11:07:50.928: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7p2v" in namespace "e2e-tests-subpath-9b27j" to be "success or failure" Feb 11 11:07:50.943: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.682384ms Feb 11 11:07:53.100: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171892216s Feb 11 11:07:55.115: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186745907s Feb 11 11:07:57.398: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.47024722s Feb 11 11:07:59.427: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.498645861s Feb 11 11:08:01.444: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.515398271s Feb 11 11:08:03.465: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.536516201s Feb 11 11:08:05.478: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.550251232s Feb 11 11:08:07.502: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 16.573791522s Feb 11 11:08:09.588: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 18.659355132s Feb 11 11:08:11.611: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 20.682363946s Feb 11 11:08:13.656: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.727836141s Feb 11 11:08:15.677: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 24.74854356s Feb 11 11:08:17.694: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 26.76625348s Feb 11 11:08:19.707: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 28.779278572s Feb 11 11:08:21.736: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 30.807390866s Feb 11 11:08:23.749: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Running", Reason="", readiness=false. Elapsed: 32.820396088s Feb 11 11:08:25.762: INFO: Pod "pod-subpath-test-configmap-7p2v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.833614833s STEP: Saw pod success Feb 11 11:08:25.762: INFO: Pod "pod-subpath-test-configmap-7p2v" satisfied condition "success or failure" Feb 11 11:08:25.766: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-7p2v container test-container-subpath-configmap-7p2v: STEP: delete the pod Feb 11 11:08:25.913: INFO: Waiting for pod pod-subpath-test-configmap-7p2v to disappear Feb 11 11:08:26.072: INFO: Pod pod-subpath-test-configmap-7p2v no longer exists STEP: Deleting pod pod-subpath-test-configmap-7p2v Feb 11 11:08:26.072: INFO: Deleting pod "pod-subpath-test-configmap-7p2v" in namespace "e2e-tests-subpath-9b27j" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:08:26.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-9b27j" for this suite. Feb 11 11:08:34.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:08:34.244: INFO: namespace: e2e-tests-subpath-9b27j, resource: bindings, ignored listing per whitelist Feb 11 11:08:34.244: INFO: namespace e2e-tests-subpath-9b27j deletion completed in 8.141708109s • [SLOW TEST:43.588 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:08:34.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 11 11:08:34.469: INFO: Waiting up to 5m0s for pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-wlsrl" to be 
"success or failure" Feb 11 11:08:34.601: INFO: Pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 131.678845ms Feb 11 11:08:36.633: INFO: Pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163980872s Feb 11 11:08:38.649: INFO: Pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180118591s Feb 11 11:08:40.732: INFO: Pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263006404s Feb 11 11:08:42.822: INFO: Pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353006689s Feb 11 11:08:44.854: INFO: Pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.384968494s STEP: Saw pod success Feb 11 11:08:44.854: INFO: Pod "pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:08:44.860: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:08:44.913: INFO: Waiting for pod pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005 to disappear Feb 11 11:08:44.921: INFO: Pod pod-d8465cfe-4cbe-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:08:44.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wlsrl" for this suite. Feb 11 11:08:50.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:08:51.148: INFO: namespace: e2e-tests-emptydir-wlsrl, resource: bindings, ignored listing per whitelist Feb 11 11:08:51.259: INFO: namespace e2e-tests-emptydir-wlsrl deletion completed in 6.330103361s • [SLOW TEST:17.014 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:08:51.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 11 11:08:51.481: INFO: Waiting up to 5m0s for pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-fk7lv" to be "success or failure" Feb 11 11:08:51.500: INFO: Pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.071338ms Feb 11 11:08:53.517: INFO: Pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035030711s Feb 11 11:08:55.533: INFO: Pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051005709s Feb 11 11:08:57.772: INFO: Pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.290537153s Feb 11 11:08:59.790: INFO: Pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308045846s Feb 11 11:09:01.862: INFO: Pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.380157603s STEP: Saw pod success Feb 11 11:09:01.862: INFO: Pod "downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:09:01.870: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005 container dapi-container: STEP: delete the pod Feb 11 11:09:02.346: INFO: Waiting for pod downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005 to disappear Feb 11 11:09:02.359: INFO: Pod downward-api-e26db03a-4cbe-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:09:02.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fk7lv" for this suite. Feb 11 11:09:08.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:09:08.588: INFO: namespace: e2e-tests-downward-api-fk7lv, resource: bindings, ignored listing per whitelist Feb 11 11:09:08.652: INFO: namespace e2e-tests-downward-api-fk7lv deletion completed in 6.281523925s • [SLOW TEST:17.394 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:09:08.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 11 11:09:19.588: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ecceb761-4cbe-11ea-a6e3-0242ac110005" Feb 11 11:09:19.588: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-ecceb761-4cbe-11ea-a6e3-0242ac110005" in namespace "e2e-tests-pods-8krkc" to be "terminated due to deadline exceeded" Feb 11 11:09:19.604: INFO: Pod "pod-update-activedeadlineseconds-ecceb761-4cbe-11ea-a6e3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 15.570606ms Feb 11 11:09:21.621: INFO: Pod "pod-update-activedeadlineseconds-ecceb761-4cbe-11ea-a6e3-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.032089337s Feb 11 11:09:21.621: INFO: Pod "pod-update-activedeadlineseconds-ecceb761-4cbe-11ea-a6e3-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:09:21.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8krkc" for this suite. Feb 11 11:09:28.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:09:29.011: INFO: namespace: e2e-tests-pods-8krkc, resource: bindings, ignored listing per whitelist Feb 11 11:09:29.125: INFO: namespace e2e-tests-pods-8krkc deletion completed in 6.487468254s • [SLOW TEST:20.473 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:09:29.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0211 11:09:39.655219 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 11 11:09:39.655: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:09:39.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-z9m8k" for this suite. Feb 11 11:09:45.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:09:45.919: INFO: namespace: e2e-tests-gc-z9m8k, resource: bindings, ignored listing per whitelist Feb 11 11:09:45.928: INFO: namespace e2e-tests-gc-z9m8k deletion completed in 6.268000257s • [SLOW TEST:16.801 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:09:45.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 11 11:09:46.150: INFO: Waiting up to 5m0s for pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-wd6qt" to be "success or failure" Feb 11 11:09:46.173: INFO: Pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.563436ms Feb 11 11:09:48.240: INFO: Pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090239155s Feb 11 11:09:50.256: INFO: Pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.106075799s Feb 11 11:09:52.371: INFO: Pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221026451s Feb 11 11:09:54.911: INFO: Pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.761229562s Feb 11 11:09:56.924: INFO: Pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.773991515s STEP: Saw pod success Feb 11 11:09:56.924: INFO: Pod "pod-030405d9-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:09:56.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-030405d9-4cbf-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:09:57.718: INFO: Waiting for pod pod-030405d9-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:09:57.755: INFO: Pod pod-030405d9-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:09:57.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wd6qt" for this suite. Feb 11 11:10:03.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:10:04.224: INFO: namespace: e2e-tests-emptydir-wd6qt, resource: bindings, ignored listing per whitelist Feb 11 11:10:04.230: INFO: namespace e2e-tests-emptydir-wd6qt deletion completed in 6.389970455s • [SLOW TEST:18.302 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:10:04.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-lw7xs Feb 11 11:10:14.511: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-lw7xs STEP: checking the pod's current state and verifying that restartCount is present Feb 11 11:10:14.532: INFO: Initial restart count of pod liveness-exec is 0 Feb 11 11:11:11.523: INFO: Restart count of pod e2e-tests-container-probe-lw7xs/liveness-exec is now 1 (56.9906996s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:11:11.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-container-probe-lw7xs" for this suite. Feb 11 11:11:19.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:11:19.799: INFO: namespace: e2e-tests-container-probe-lw7xs, resource: bindings, ignored listing per whitelist Feb 11 11:11:19.889: INFO: namespace e2e-tests-container-probe-lw7xs deletion completed in 8.287372159s • [SLOW TEST:75.659 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:11:19.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 11 11:11:20.176: INFO: Waiting up to 5m0s for pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-ptmkd" to be "success or failure" Feb 11 11:11:20.226: INFO: Pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.986916ms Feb 11 11:11:22.287: INFO: Pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110603704s Feb 11 11:11:24.299: INFO: Pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122249444s Feb 11 11:11:26.455: INFO: Pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278673698s Feb 11 11:11:28.512: INFO: Pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335219257s Feb 11 11:11:30.549: INFO: Pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.371941316s STEP: Saw pod success Feb 11 11:11:30.549: INFO: Pod "pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:11:30.565: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:11:31.208: INFO: Waiting for pod pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:11:31.226: INFO: Pod pod-3b0a9424-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:11:31.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ptmkd" for this suite. 
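For context, each of the emptyDir cases above runs a short-lived pod that writes into an emptyDir mount and checks the resulting ownership and file mode. A minimal sketch of that shape, assuming a tmpfs-backed (medium: Memory) emptyDir and a busybox container standing in for the suite's own mount-test image; names and the command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs, matching the
					// (root,0666,tmpfs) variant; the default-medium variants simply
					// omit the field.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // stand-in for the e2e mount-test image
				Command:      []string{"sh", "-c", "touch /mnt/scratch/f && chmod 0666 /mnt/scratch/f && ls -ln /mnt/scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}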
Feb 11 11:11:39.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:11:39.854: INFO: namespace: e2e-tests-emptydir-ptmkd, resource: bindings, ignored listing per whitelist Feb 11 11:11:39.979: INFO: namespace e2e-tests-emptydir-ptmkd deletion completed in 8.738267583s • [SLOW TEST:20.089 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:11:39.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 11 11:11:40.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v62hz' Feb 11 11:11:40.693: INFO: stderr: "" Feb 11 11:11:40.694: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 11 11:11:41.718: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:41.718: INFO: Found 0 / 1 Feb 11 11:11:42.710: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:42.710: INFO: Found 0 / 1 Feb 11 11:11:43.707: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:43.707: INFO: Found 0 / 1 Feb 11 11:11:44.744: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:44.745: INFO: Found 0 / 1 Feb 11 11:11:45.704: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:45.704: INFO: Found 0 / 1 Feb 11 11:11:47.370: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:47.371: INFO: Found 0 / 1 Feb 11 11:11:47.716: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:47.716: INFO: Found 0 / 1 Feb 11 11:11:48.715: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:48.715: INFO: Found 0 / 1 Feb 11 11:11:50.471: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:50.472: INFO: Found 0 / 1 Feb 11 11:11:50.781: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:50.782: INFO: Found 0 / 1 Feb 11 11:11:51.708: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:51.708: INFO: Found 0 / 1 Feb 11 11:11:52.714: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:52.714: INFO: Found 1 / 1 Feb 11 11:11:52.715: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 11 11:11:52.722: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:52.722: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
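The patch issued just below adds the annotation x=y with a plain strategic-merge body. A minimal sketch of building that same body programmatically; the apimachinery patch-type constant in the comment is what a client-go Patch call would pass, the rest is ordinary JSON assembly:

package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/types"
)

func main() {
	// Same shape as the -p argument used by kubectl below:
	// {"metadata":{"annotations":{"x":"y"}}}
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{"x": "y"},
		},
	}
	body, _ := json.Marshal(patch)

	// A programmatic client would send this body with the strategic-merge
	// content type instead of shelling out to kubectl.
	fmt.Println(string(body), "as", types.StrategicMergePatchType)
}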
Feb 11 11:11:52.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-v2f6g --namespace=e2e-tests-kubectl-v62hz -p {"metadata":{"annotations":{"x":"y"}}}' Feb 11 11:11:53.013: INFO: stderr: "" Feb 11 11:11:53.013: INFO: stdout: "pod/redis-master-v2f6g patched\n" STEP: checking annotations Feb 11 11:11:53.021: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:11:53.021: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:11:53.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v62hz" for this suite. Feb 11 11:12:17.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:12:17.156: INFO: namespace: e2e-tests-kubectl-v62hz, resource: bindings, ignored listing per whitelist Feb 11 11:12:17.262: INFO: namespace e2e-tests-kubectl-v62hz deletion completed in 24.236750453s • [SLOW TEST:37.283 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:12:17.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:12:27.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4mb4h" for this suite. 
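The wrapper-volume case above sets up a secret and a configMap and mounts both in one pod to check that their kubelet-managed wrapper emptyDirs do not collide. A rough sketch of that pod layout, with illustrative names and a busybox command in place of the suite's own checks:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{
					Name:         "secret-vol",
					VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}},
				},
				{
					Name: "configmap-vol",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:    "checker",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-vol /etc/configmap-vol"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret-vol", ReadOnly: true},
					{Name: "configmap-vol", MountPath: "/etc/configmap-vol", ReadOnly: true},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}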
Feb 11 11:12:33.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:12:34.140: INFO: namespace: e2e-tests-emptydir-wrapper-4mb4h, resource: bindings, ignored listing per whitelist Feb 11 11:12:34.220: INFO: namespace e2e-tests-emptydir-wrapper-4mb4h deletion completed in 6.467200629s • [SLOW TEST:16.957 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:12:34.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 11:12:34.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-q9m87" to be "success or failure" Feb 11 11:12:34.626: INFO: Pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.534256ms Feb 11 11:12:36.640: INFO: Pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062395731s Feb 11 11:12:38.696: INFO: Pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118067403s Feb 11 11:12:42.483: INFO: Pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.905651929s Feb 11 11:12:44.508: INFO: Pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.930003664s Feb 11 11:12:46.537: INFO: Pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.959235941s STEP: Saw pod success Feb 11 11:12:46.537: INFO: Pod "downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:12:46.550: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 11:12:46.775: INFO: Waiting for pod downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:12:46.805: INFO: Pod downwardapi-volume-6759ad09-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:12:46.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q9m87" for this suite. Feb 11 11:12:52.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:12:52.989: INFO: namespace: e2e-tests-downward-api-q9m87, resource: bindings, ignored listing per whitelist Feb 11 11:12:53.103: INFO: namespace e2e-tests-downward-api-q9m87 deletion completed in 6.258771379s • [SLOW TEST:18.883 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:12:53.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 11:12:53.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-xksqm" to be "success or failure" Feb 11 11:12:53.290: INFO: Pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.591762ms Feb 11 11:12:55.307: INFO: Pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090041606s Feb 11 11:12:57.331: INFO: Pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114478178s Feb 11 11:12:59.978: INFO: Pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760945008s Feb 11 11:13:02.164: INFO: Pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.947648348s Feb 11 11:13:04.189: INFO: Pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.971875749s STEP: Saw pod success Feb 11 11:13:04.189: INFO: Pod "downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:13:04.200: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 11:13:04.521: INFO: Waiting for pod downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:13:04.575: INFO: Pod downwardapi-volume-7284eb98-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:13:04.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xksqm" for this suite. Feb 11 11:13:10.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:13:10.809: INFO: namespace: e2e-tests-downward-api-xksqm, resource: bindings, ignored listing per whitelist Feb 11 11:13:10.833: INFO: namespace e2e-tests-downward-api-xksqm deletion completed in 6.230539122s • [SLOW TEST:17.729 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:13:10.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
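For context, the lifecycle-hook test first starts the handler container above to receive the HTTPGet, then (in the steps below) creates a pod whose container declares a postStart httpGet hook pointed at that handler. A minimal sketch of such a container, assuming the v1.13-era k8s.io/api used by this suite, where the hook type is still named Handler (newer releases call it LifecycleHandler); the target address, port and path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "k8s.gcr.io/pause:3.1", // the hooked container itself only needs to keep running
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{ // v1.13-era type name; later API versions use LifecycleHandler
				HTTPGet: &corev1.HTTPGetAction{
					Host: "10.32.0.4",           // illustrative: the handler pod's IP
					Port: intstr.FromInt(8080),  // illustrative: the handler's port
					Path: "/echo?msg=poststart", // illustrative: whatever the handler records
				},
			},
		},
	}

	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}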
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 11 11:13:31.368: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 11:13:31.448: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 11:13:33.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 11:13:33.461: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 11:13:35.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 11:13:35.471: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 11:13:37.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 11:13:37.466: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 11:13:39.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 11:13:39.480: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:13:39.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hb4bg" for this suite. Feb 11 11:14:03.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:14:03.775: INFO: namespace: e2e-tests-container-lifecycle-hook-hb4bg, resource: bindings, ignored listing per whitelist Feb 11 11:14:03.863: INFO: namespace e2e-tests-container-lifecycle-hook-hb4bg deletion completed in 24.368627385s • [SLOW TEST:53.030 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:14:03.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-9cd0041d-4cbf-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume secrets Feb 11 11:14:04.237: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-bxn4c" to be "success or failure" Feb 11 11:14:04.319: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 81.076993ms Feb 11 11:14:06.666: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428876463s Feb 11 11:14:08.714: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476107895s Feb 11 11:14:10.731: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493693306s Feb 11 11:14:12.799: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561114248s Feb 11 11:14:14.819: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.58109783s Feb 11 11:14:16.928: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.690580924s STEP: Saw pod success Feb 11 11:14:16.929: INFO: Pod "pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:14:16.950: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 11 11:14:17.467: INFO: Waiting for pod pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:14:17.480: INFO: Pod pod-projected-secrets-9cd58051-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:14:17.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bxn4c" for this suite. 
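The projected-secret case above mounts a secret through a projected volume and has the pod read it back. A rough sketch of that volume layout; the secret name, key, mount path and busybox command are illustrative, not the suite's exact fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}}, // illustrative key
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox", // stand-in for the suite's mount-test image
				Command:      []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}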
Feb 11 11:14:23.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:14:23.655: INFO: namespace: e2e-tests-projected-bxn4c, resource: bindings, ignored listing per whitelist Feb 11 11:14:23.740: INFO: namespace e2e-tests-projected-bxn4c deletion completed in 6.247663558s • [SLOW TEST:19.876 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:14:23.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-a8ad6703-4cbf-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:14:24.131: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-gsb5r" to be "success or failure" Feb 11 11:14:24.198: INFO: Pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.338886ms Feb 11 11:14:26.233: INFO: Pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102211196s Feb 11 11:14:28.256: INFO: Pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125298218s Feb 11 11:14:30.281: INFO: Pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150228108s Feb 11 11:14:32.296: INFO: Pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165039552s Feb 11 11:14:34.316: INFO: Pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.185333997s STEP: Saw pod success Feb 11 11:14:34.317: INFO: Pod "pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:14:34.325: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 11 11:14:34.406: INFO: Waiting for pod pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:14:34.419: INFO: Pod pod-projected-configmaps-a8b0e193-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:14:34.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gsb5r" for this suite. Feb 11 11:14:40.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:14:40.768: INFO: namespace: e2e-tests-projected-gsb5r, resource: bindings, ignored listing per whitelist Feb 11 11:14:40.865: INFO: namespace e2e-tests-projected-gsb5r deletion completed in 6.433006626s • [SLOW TEST:17.124 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:14:40.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0211 11:15:11.694826 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
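This second garbage-collector case exercises the opposite behaviour from the earlier one: deleteOptions.PropagationPolicy=Orphan tells the API server to delete the deployment but leave its ReplicaSet behind. A minimal sketch of those delete options, using only the apimachinery types and no client call:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan propagation: the owning object is deleted, its dependents
	// (here, the ReplicaSet created by the deployment) are kept.
	policy := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	body, _ := json.Marshal(opts)
	fmt.Println(string(body)) // roughly {"propagationPolicy":"Orphan"}
}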
Feb 11 11:15:11.695: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:15:11.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-fp5j7" for this suite. Feb 11 11:15:21.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:15:21.813: INFO: namespace: e2e-tests-gc-fp5j7, resource: bindings, ignored listing per whitelist Feb 11 11:15:21.861: INFO: namespace e2e-tests-gc-fp5j7 deletion completed in 10.160743891s • [SLOW TEST:40.996 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:15:21.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 11:15:22.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-gqztc" to be "success or failure" Feb 11 11:15:22.844: INFO: Pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 151.547733ms Feb 11 11:15:24.868: INFO: Pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175815653s Feb 11 11:15:26.911: INFO: Pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218700234s Feb 11 11:15:28.936: INFO: Pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243902899s Feb 11 11:15:30.965: INFO: Pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272410499s Feb 11 11:15:33.466: INFO: Pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.773456474s STEP: Saw pod success Feb 11 11:15:33.466: INFO: Pod "downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:15:33.476: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 11:15:33.938: INFO: Waiting for pod downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:15:33.947: INFO: Pod downwardapi-volume-cb9cab22-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:15:33.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gqztc" for this suite. Feb 11 11:15:40.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:15:40.073: INFO: namespace: e2e-tests-projected-gqztc, resource: bindings, ignored listing per whitelist Feb 11 11:15:40.319: INFO: namespace e2e-tests-projected-gqztc deletion completed in 6.364540401s • [SLOW TEST:18.458 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:15:40.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d6503b4b-4cbf-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:15:40.698: INFO: Waiting up to 5m0s for pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-hwn29" to be "success or failure" Feb 11 11:15:40.720: INFO: Pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", 
Reason="", readiness=false. Elapsed: 21.696857ms Feb 11 11:15:42.735: INFO: Pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036193922s Feb 11 11:15:44.746: INFO: Pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047828371s Feb 11 11:15:46.949: INFO: Pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250334585s Feb 11 11:15:50.276: INFO: Pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.577816576s Feb 11 11:15:52.296: INFO: Pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.597348722s STEP: Saw pod success Feb 11 11:15:52.296: INFO: Pod "pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:15:52.303: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 11 11:15:53.234: INFO: Waiting for pod pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:15:53.313: INFO: Pod pod-configmaps-d651a523-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:15:53.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hwn29" for this suite. Feb 11 11:15:59.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:15:59.555: INFO: namespace: e2e-tests-configmap-hwn29, resource: bindings, ignored listing per whitelist Feb 11 11:15:59.616: INFO: namespace e2e-tests-configmap-hwn29 deletion completed in 6.294088884s • [SLOW TEST:19.297 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:15:59.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 11 11:15:59.849: INFO: Waiting up to 5m0s for pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-pj7pq" to be "success or failure" Feb 11 11:15:59.870: INFO: Pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.660582ms Feb 11 11:16:02.207: INFO: Pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357784525s Feb 11 11:16:04.233: INFO: Pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383750491s Feb 11 11:16:06.412: INFO: Pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562111979s Feb 11 11:16:08.428: INFO: Pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.578470947s Feb 11 11:16:10.471: INFO: Pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.62177121s STEP: Saw pod success Feb 11 11:16:10.472: INFO: Pod "pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:16:10.503: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:16:10.692: INFO: Waiting for pod pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005 to disappear Feb 11 11:16:10.705: INFO: Pod pod-e1c2063c-4cbf-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:16:10.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pj7pq" for this suite. Feb 11 11:16:16.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:16:16.981: INFO: namespace: e2e-tests-emptydir-pj7pq, resource: bindings, ignored listing per whitelist Feb 11 11:16:17.017: INFO: namespace e2e-tests-emptydir-pj7pq deletion completed in 6.245892638s • [SLOW TEST:17.400 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:16:17.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 11 11:16:17.218: INFO: PodSpec: initContainers in spec.initContainers Feb 11 11:17:36.036: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ec1fbdb9-4cbf-11ea-a6e3-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-nh5p8", 
SelfLink:"/api/v1/namespaces/e2e-tests-init-container-nh5p8/pods/pod-init-ec1fbdb9-4cbf-11ea-a6e3-0242ac110005", UID:"ec20d3ba-4cbf-11ea-a994-fa163e34d433", ResourceVersion:"21300416", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717016577, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"218938732"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-n26gx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000277d40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n26gx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n26gx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n26gx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ed2758), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ae5860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ed28a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ed28c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ed28c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ed28cc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717016577, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717016577, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717016577, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717016577, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00127ae00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00043c380)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00043c460)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c54c748be14d187028c0a35850373dde0300c5f8b0c839cfc749fcd1d4d543f3"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00127ae40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00127ae20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:17:36.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-nh5p8" for this suite. 
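
The pod spec dumped above reduces to roughly the following manifest (a minimal sketch; the container names, images, label and restart policy are taken from the dump, the pod name is illustrative). Because init1 always exits non-zero and the restart policy is Always, the kubelet keeps restarting init1; init containers run strictly in order, so init2 never starts, and the app container run1 stays Waiting with reason ContainersNotInitialized, which is exactly what the status in the dump shows.

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo                 # illustrative; the test generates a unique name
  labels:
    name: foo
spec:
  restartPolicy: Always               # RestartAlways is what keeps init1 being retried
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]           # always fails, so initialization never completes
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]            # never runs; init containers execute in order
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1       # never started; app containers wait for all init containers to succeed
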
Feb 11 11:18:00.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:18:00.345: INFO: namespace: e2e-tests-init-container-nh5p8, resource: bindings, ignored listing per whitelist Feb 11 11:18:00.371: INFO: namespace e2e-tests-init-container-nh5p8 deletion completed in 24.232436887s • [SLOW TEST:103.353 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:18:00.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 11 11:18:13.918: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:18:15.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-x7qgv" for this suite. 
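
The adoption and release steps above come down to ownerReferences: a bare pod carrying the label name=pod-adoption-release is created first, then a ReplicaSet whose selector matches that label. The ReplicaSet controller adopts the orphan by setting itself as the pod's controller ownerReference, and when the pod's label is changed so it no longer matches, the controller drops the ownerReference (releasing the pod) and creates a replacement to restore the desired replica count. A minimal sketch of such a ReplicaSet (the container image is an assumption; any long-running container works):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release     # matches the 'name' label on the pre-existing bare pod
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine   # image is an assumption
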
Feb 11 11:18:42.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:18:42.372: INFO: namespace: e2e-tests-replicaset-x7qgv, resource: bindings, ignored listing per whitelist Feb 11 11:18:42.407: INFO: namespace e2e-tests-replicaset-x7qgv deletion completed in 26.572949781s • [SLOW TEST:42.036 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:18:42.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-z4lnf/configmap-test-4303157c-4cc0-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:18:43.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-z4lnf" to be "success or failure" Feb 11 11:18:43.024: INFO: Pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.717628ms Feb 11 11:18:45.063: INFO: Pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048880648s Feb 11 11:18:47.104: INFO: Pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089879668s Feb 11 11:18:49.348: INFO: Pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.333058886s Feb 11 11:18:51.386: INFO: Pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371436322s Feb 11 11:18:53.412: INFO: Pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.397189755s STEP: Saw pod success Feb 11 11:18:53.412: INFO: Pod "pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:18:53.422: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005 container env-test: STEP: delete the pod Feb 11 11:18:54.431: INFO: Waiting for pod pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005 to disappear Feb 11 11:18:54.673: INFO: Pod pod-configmaps-4303fd97-4cc0-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:18:54.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-z4lnf" for this suite. 
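
The test just finished wires a ConfigMap key into a container environment variable and then checks that the pod runs to completion ("success or failure" means the pod must reach Succeeded). A minimal sketch of the pattern; the key, value, image, command and variable name are illustrative, only the configMapKeyRef wiring is the point:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test              # the test appends a generated suffix to this name
data:
  data-1: value-1                   # key and value are illustrative
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env          # illustrative name
spec:
  restartPolicy: Never              # the framework waits for the pod to reach Succeeded
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29   # image is an assumption
    command: ["sh", "-c", "env"]            # prints the environment so the value shows up in the container log
    env:
    - name: CONFIG_DATA_1                   # variable name is illustrative
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
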
Feb 11 11:19:00.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:19:00.969: INFO: namespace: e2e-tests-configmap-z4lnf, resource: bindings, ignored listing per whitelist Feb 11 11:19:01.025: INFO: namespace e2e-tests-configmap-z4lnf deletion completed in 6.325410925s • [SLOW TEST:18.618 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:19:01.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 11 11:19:01.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-zc7lh,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc7lh/configmaps/e2e-watch-test-resource-version,UID:4ddbc903-4cc0-11ea-a994-fa163e34d433,ResourceVersion:21300609,Generation:0,CreationTimestamp:2020-02-11 11:19:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 11 11:19:01.263: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-zc7lh,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc7lh/configmaps/e2e-watch-test-resource-version,UID:4ddbc903-4cc0-11ea-a994-fa163e34d433,ResourceVersion:21300610,Generation:0,CreationTimestamp:2020-02-11 11:19:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:19:01.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-zc7lh" for this suite. 
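
The watch test above drives one ConfigMap through a create, two updates and a delete, then opens a watch starting at the resourceVersion returned by the first update. At the API level that is an ordinary watch request with resourceVersion set, and the server replays only events newer than that version, which is why exactly two events are logged: the second MODIFIED (already carrying mutation: 2) and the DELETED. Reassembled from the dumped object, the ConfigMap looks roughly like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: e2e-tests-watch-zc7lh
  labels:
    watch-this-configmap: from-resource-version   # label the test uses to identify the object
data:
  mutation: "2"                                   # value after the second modification, as carried by both replayed events
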
Feb 11 11:19:07.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:19:07.478: INFO: namespace: e2e-tests-watch-zc7lh, resource: bindings, ignored listing per whitelist Feb 11 11:19:07.514: INFO: namespace e2e-tests-watch-zc7lh deletion completed in 6.246468108s • [SLOW TEST:6.489 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:19:07.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-51bd9561-4cc0-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:19:07.740: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-dx8gd" to be "success or failure" Feb 11 11:19:07.862: INFO: Pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 121.869413ms Feb 11 11:19:09.877: INFO: Pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136955516s Feb 11 11:19:11.903: INFO: Pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162862856s Feb 11 11:19:13.927: INFO: Pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186655572s Feb 11 11:19:15.957: INFO: Pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21666424s Feb 11 11:19:18.012: INFO: Pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.271734366s STEP: Saw pod success Feb 11 11:19:18.012: INFO: Pod "pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:19:18.017: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 11 11:19:18.404: INFO: Waiting for pod pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005 to disappear Feb 11 11:19:18.416: INFO: Pod pod-projected-configmaps-51bf1860-4cc0-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:19:18.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dx8gd" for this suite. Feb 11 11:19:26.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:19:26.703: INFO: namespace: e2e-tests-projected-dx8gd, resource: bindings, ignored listing per whitelist Feb 11 11:19:26.731: INFO: namespace e2e-tests-projected-dx8gd deletion completed in 8.306848224s • [SLOW TEST:19.217 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:19:26.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 11:19:26.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9ztrf' Feb 11 11:19:28.864: INFO: stderr: "" Feb 11 11:19:28.864: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 11 11:19:38.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9ztrf -o json' Feb 11 11:19:39.107: INFO: stderr: "" Feb 11 11:19:39.107: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-11T11:19:28Z\",\n \"labels\": {\n 
\"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-9ztrf\",\n \"resourceVersion\": \"21300697\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-9ztrf/pods/e2e-test-nginx-pod\",\n \"uid\": \"5e50ff45-4cc0-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-t6vzj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-t6vzj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-t6vzj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T11:19:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T11:19:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T11:19:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T11:19:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://062ee5a63077a9950b1c25b704a3bc5c79aa3932b9bb7ddc6e90ac90473ce5dd\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-11T11:19:37Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-11T11:19:28Z\"\n }\n}\n" STEP: replace the image in the pod Feb 11 11:19:39.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-9ztrf' Feb 11 11:19:39.633: INFO: stderr: "" Feb 11 11:19:39.633: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 11 11:19:39.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9ztrf' Feb 11 11:19:52.595: INFO: stderr: "" Feb 11 11:19:52.595: INFO: 
stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:19:52.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9ztrf" for this suite. Feb 11 11:19:58.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:19:58.813: INFO: namespace: e2e-tests-kubectl-9ztrf, resource: bindings, ignored listing per whitelist Feb 11 11:19:58.867: INFO: namespace e2e-tests-kubectl-9ztrf deletion completed in 6.222438951s • [SLOW TEST:32.136 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:19:58.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-70671072-4cc0-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume secrets Feb 11 11:19:59.169: INFO: Waiting up to 5m0s for pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-b2l2h" to be "success or failure" Feb 11 11:19:59.187: INFO: Pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.184767ms Feb 11 11:20:01.338: INFO: Pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168822715s Feb 11 11:20:03.355: INFO: Pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185553212s Feb 11 11:20:05.808: INFO: Pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639012274s Feb 11 11:20:07.838: INFO: Pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.668670419s Feb 11 11:20:09.861: INFO: Pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.691882487s STEP: Saw pod success Feb 11 11:20:09.861: INFO: Pod "pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:20:09.869: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 11 11:20:10.003: INFO: Waiting for pod pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005 to disappear Feb 11 11:20:10.033: INFO: Pod pod-secrets-70685416-4cc0-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:20:10.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-b2l2h" for this suite. Feb 11 11:20:17.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:20:17.300: INFO: namespace: e2e-tests-secrets-b2l2h, resource: bindings, ignored listing per whitelist Feb 11 11:20:17.439: INFO: namespace e2e-tests-secrets-b2l2h deletion completed in 7.367683918s • [SLOW TEST:18.572 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:20:17.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-7b709cf5-4cc0-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:20:17.685: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-9dkft" to be "success or failure" Feb 11 11:20:17.697: INFO: Pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.255928ms Feb 11 11:20:19.726: INFO: Pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04128763s Feb 11 11:20:21.792: INFO: Pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107290871s Feb 11 11:20:23.819: INFO: Pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133672735s Feb 11 11:20:25.843: INFO: Pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.158021702s Feb 11 11:20:27.865: INFO: Pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.180032427s STEP: Saw pod success Feb 11 11:20:27.865: INFO: Pod "pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:20:27.875: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 11 11:20:28.209: INFO: Waiting for pod pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005 to disappear Feb 11 11:20:28.408: INFO: Pod pod-projected-configmaps-7b71ca15-4cc0-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:20:28.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9dkft" for this suite. Feb 11 11:20:34.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:20:34.664: INFO: namespace: e2e-tests-projected-9dkft, resource: bindings, ignored listing per whitelist Feb 11 11:20:34.678: INFO: namespace e2e-tests-projected-9dkft deletion completed in 6.253081194s • [SLOW TEST:17.238 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:20:34.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 11 11:20:35.021: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"85b90bf8-4cc0-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00156a902), BlockOwnerDeletion:(*bool)(0xc00156a903)}} Feb 11 11:20:35.085: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"85b4f96a-4cc0-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00239aea2), BlockOwnerDeletion:(*bool)(0xc00239aea3)}} Feb 11 11:20:35.279: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"85b71e23-4cc0-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00156adb2), BlockOwnerDeletion:(*bool)(0xc00156adb3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:20:40.343: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-hkztt" for this suite. Feb 11 11:20:46.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:20:46.607: INFO: namespace: e2e-tests-gc-hkztt, resource: bindings, ignored listing per whitelist Feb 11 11:20:46.695: INFO: namespace e2e-tests-gc-hkztt deletion completed in 6.34034346s • [SLOW TEST:12.017 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:20:46.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-8cd191c5-4cc0-11ea-a6e3-0242ac110005 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-8cd191c5-4cc0-11ea-a6e3-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:21:01.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bxrvh" for this suite. 
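
The updates-should-be-reflected test mounts a ConfigMap as a volume, edits the ConfigMap, and waits for the new value to appear in the mounted file. Unlike environment variables (see the earlier env-test sketch), keys projected through a configMap volume are refreshed by the kubelet after the object changes, subject to its sync delay; that refresh is what "waiting to observe update in volume" checks. A minimal sketch of such a pod; the key name, image and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-upd           # illustrative; the test generates a unique name
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # image is an assumption
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 1; done"]   # illustrative: keep reading the mounted key
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: configmap-test-upd       # the test appends a generated suffix to this name
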
Feb 11 11:21:27.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:21:27.370: INFO: namespace: e2e-tests-configmap-bxrvh, resource: bindings, ignored listing per whitelist Feb 11 11:21:27.483: INFO: namespace e2e-tests-configmap-bxrvh deletion completed in 26.379513241s • [SLOW TEST:40.788 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:21:27.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 11 11:21:27.754: INFO: Number of nodes with available pods: 0 Feb 11 11:21:27.754: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:28.792: INFO: Number of nodes with available pods: 0 Feb 11 11:21:28.792: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:29.831: INFO: Number of nodes with available pods: 0 Feb 11 11:21:29.831: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:30.963: INFO: Number of nodes with available pods: 0 Feb 11 11:21:30.963: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:31.805: INFO: Number of nodes with available pods: 0 Feb 11 11:21:31.805: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:32.774: INFO: Number of nodes with available pods: 0 Feb 11 11:21:32.774: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:34.100: INFO: Number of nodes with available pods: 0 Feb 11 11:21:34.100: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:34.789: INFO: Number of nodes with available pods: 0 Feb 11 11:21:34.789: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:35.798: INFO: Number of nodes with available pods: 0 Feb 11 11:21:35.798: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:36.779: INFO: Number of nodes with available pods: 0 Feb 11 11:21:36.779: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:37.776: INFO: Number of nodes with available pods: 1 Feb 11 11:21:37.776: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
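
The simple DaemonSet created above can be sketched as below (the selector labels and image are assumptions; the log only gives the name daemon-set). The controller runs one pod on every eligible node, so on this single-node cluster "available pods: 1" means fully rolled out, and deleting that pod, which is what the step just announced does, makes the controller recreate it; that recreation is the "revived" check that follows.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                # label key/value are an assumption
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # image is an assumption; the test only needs one running pod per node
        ports:
        - containerPort: 80
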
Feb 11 11:21:37.883: INFO: Number of nodes with available pods: 0 Feb 11 11:21:37.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:38.912: INFO: Number of nodes with available pods: 0 Feb 11 11:21:38.913: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:39.938: INFO: Number of nodes with available pods: 0 Feb 11 11:21:39.938: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:41.139: INFO: Number of nodes with available pods: 0 Feb 11 11:21:41.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:41.907: INFO: Number of nodes with available pods: 0 Feb 11 11:21:41.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:42.918: INFO: Number of nodes with available pods: 0 Feb 11 11:21:42.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:43.917: INFO: Number of nodes with available pods: 0 Feb 11 11:21:43.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:44.931: INFO: Number of nodes with available pods: 0 Feb 11 11:21:44.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:45.904: INFO: Number of nodes with available pods: 0 Feb 11 11:21:45.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:47.090: INFO: Number of nodes with available pods: 0 Feb 11 11:21:47.090: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:47.913: INFO: Number of nodes with available pods: 0 Feb 11 11:21:47.913: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:48.936: INFO: Number of nodes with available pods: 0 Feb 11 11:21:48.937: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:49.903: INFO: Number of nodes with available pods: 0 Feb 11 11:21:49.903: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:51.267: INFO: Number of nodes with available pods: 0 Feb 11 11:21:51.267: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:52.022: INFO: Number of nodes with available pods: 0 Feb 11 11:21:52.022: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:52.929: INFO: Number of nodes with available pods: 0 Feb 11 11:21:52.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:53.917: INFO: Number of nodes with available pods: 0 Feb 11 11:21:53.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 11 11:21:54.915: INFO: Number of nodes with available pods: 1 Feb 11 11:21:54.915: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pfgnc, will wait for the garbage collector to delete the pods Feb 11 11:21:55.008: INFO: Deleting DaemonSet.extensions daemon-set took: 22.289183ms Feb 11 11:21:55.209: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.818907ms Feb 11 11:22:12.718: INFO: Number of nodes with available pods: 0 Feb 11 11:22:12.718: INFO: Number of running nodes: 0, number of available pods: 0 Feb 11 11:22:12.723: 
INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pfgnc/daemonsets","resourceVersion":"21301060"},"items":null} Feb 11 11:22:12.726: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pfgnc/pods","resourceVersion":"21301060"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:22:12.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-pfgnc" for this suite. Feb 11 11:22:20.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:22:20.902: INFO: namespace: e2e-tests-daemonsets-pfgnc, resource: bindings, ignored listing per whitelist Feb 11 11:22:20.948: INFO: namespace e2e-tests-daemonsets-pfgnc deletion completed in 8.210156277s • [SLOW TEST:53.463 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:22:20.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wsxf6;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wsxf6;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wsxf6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 84.240.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.240.84_udp@PTR;check="$$(dig +tcp +noall +answer +search 84.240.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.240.84_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wsxf6;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wsxf6;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wsxf6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 84.240.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.240.84_udp@PTR;check="$$(dig +tcp +noall +answer +search 84.240.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.240.84_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 11 11:22:37.444: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.452: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.458: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wsxf6 from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.466: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wsxf6 from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.474: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.480: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.486: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.492: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.498: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.507: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.513: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.518: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the 
requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.524: INFO: Unable to read 10.110.240.84_udp@PTR from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.528: INFO: Unable to read 10.110.240.84_tcp@PTR from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.533: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.539: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.546: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wsxf6 from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.552: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wsxf6 from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.558: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.566: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.572: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.578: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.584: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.594: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.606: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods 
dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.610: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.615: INFO: Unable to read 10.110.240.84_udp@PTR from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.620: INFO: Unable to read 10.110.240.84_tcp@PTR from pod e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005) Feb 11 11:22:37.620: INFO: Lookups using e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-wsxf6 wheezy_tcp@dns-test-service.e2e-tests-dns-wsxf6 wheezy_udp@dns-test-service.e2e-tests-dns-wsxf6.svc wheezy_tcp@dns-test-service.e2e-tests-dns-wsxf6.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.110.240.84_udp@PTR 10.110.240.84_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wsxf6 jessie_tcp@dns-test-service.e2e-tests-dns-wsxf6 jessie_udp@dns-test-service.e2e-tests-dns-wsxf6.svc jessie_tcp@dns-test-service.e2e-tests-dns-wsxf6.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wsxf6.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.110.240.84_udp@PTR 10.110.240.84_tcp@PTR] Feb 11 11:22:42.785: INFO: DNS probes using e2e-tests-dns-wsxf6/dns-test-c518fbb5-4cc0-11ea-a6e3-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:22:43.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-wsxf6" for this suite. 
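Note: the wheezy and jessie probe loops shown above reduce to per-record dig lookups of the shape below. The service name, namespace, and reverse-zone address are the ones from this run, and the lookups only resolve when issued from a pod inside the cluster; this is a sketch of the probe, not the framework's own script.

    # UDP A-record lookup for the test service (what the *_udp@... entries check)
    dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wsxf6.svc A
    # TCP SRV lookup for the named port (what the *_tcp@_http._tcp.... entries check)
    dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wsxf6.svc SRV
    # reverse (PTR) lookup of the service ClusterIP 10.110.240.84
    dig +notcp +noall +answer +search 84.240.110.10.in-addr.arpa. PTR

Each probe writes its OK marker only when the answer section is non-empty, which is why the run first logs "Unable to read ..." for every record and then reports the probes as succeeded once the markers appear.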
Feb 11 11:22:51.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:22:51.454: INFO: namespace: e2e-tests-dns-wsxf6, resource: bindings, ignored listing per whitelist Feb 11 11:22:51.496: INFO: namespace e2e-tests-dns-wsxf6 deletion completed in 8.195911133s • [SLOW TEST:30.548 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:22:51.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Feb 11 11:22:51.746: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix586902795/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:22:51.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mw5vr" for this suite. 
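The proxy test above only needs the API to answer on the socket; a rough manual equivalent of "retrieving proxy /api/ output" is the pair below. The socket path is illustrative, and curl --unix-socket stands in for the Go client the framework actually uses.

    # serve the API server over a local unix socket (same flag the test passes)
    kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy.sock &
    # fetch /api/ through the socket; a JSON APIVersions object indicates the proxy works
    curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/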
Feb 11 11:22:57.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:22:58.023: INFO: namespace: e2e-tests-kubectl-mw5vr, resource: bindings, ignored listing per whitelist Feb 11 11:22:58.136: INFO: namespace e2e-tests-kubectl-mw5vr deletion completed in 6.208008894s • [SLOW TEST:6.640 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:22:58.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 11 11:22:58.310: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zwz76,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwz76/configmaps/e2e-watch-test-watch-closed,UID:db2b4525-4cc0-11ea-a994-fa163e34d433,ResourceVersion:21301194,Generation:0,CreationTimestamp:2020-02-11 11:22:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 11 11:22:58.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zwz76,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwz76/configmaps/e2e-watch-test-watch-closed,UID:db2b4525-4cc0-11ea-a994-fa163e34d433,ResourceVersion:21301195,Generation:0,CreationTimestamp:2020-02-11 11:22:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 11 11:22:58.348: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zwz76,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwz76/configmaps/e2e-watch-test-watch-closed,UID:db2b4525-4cc0-11ea-a994-fa163e34d433,ResourceVersion:21301196,Generation:0,CreationTimestamp:2020-02-11 11:22:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 11 11:22:58.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zwz76,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwz76/configmaps/e2e-watch-test-watch-closed,UID:db2b4525-4cc0-11ea-a994-fa163e34d433,ResourceVersion:21301197,Generation:0,CreationTimestamp:2020-02-11 11:22:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:22:58.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-zwz76" for this suite. Feb 11 11:23:04.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:23:04.564: INFO: namespace: e2e-tests-watch-zwz76, resource: bindings, ignored listing per whitelist Feb 11 11:23:04.584: INFO: namespace e2e-tests-watch-zwz76 deletion completed in 6.229314753s • [SLOW TEST:6.448 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:23:04.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fq7kx STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 11 11:23:04.884: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 11 11:23:39.308: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 
10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-fq7kx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:23:39.309: INFO: >>> kubeConfig: /root/.kube/config I0211 11:23:39.407573 9 log.go:172] (0xc0016bc2c0) (0xc0015310e0) Create stream I0211 11:23:39.407839 9 log.go:172] (0xc0016bc2c0) (0xc0015310e0) Stream added, broadcasting: 1 I0211 11:23:39.417590 9 log.go:172] (0xc0016bc2c0) Reply frame received for 1 I0211 11:23:39.417639 9 log.go:172] (0xc0016bc2c0) (0xc000a1c820) Create stream I0211 11:23:39.417655 9 log.go:172] (0xc0016bc2c0) (0xc000a1c820) Stream added, broadcasting: 3 I0211 11:23:39.419946 9 log.go:172] (0xc0016bc2c0) Reply frame received for 3 I0211 11:23:39.419980 9 log.go:172] (0xc0016bc2c0) (0xc000a1c8c0) Create stream I0211 11:23:39.420018 9 log.go:172] (0xc0016bc2c0) (0xc000a1c8c0) Stream added, broadcasting: 5 I0211 11:23:39.421460 9 log.go:172] (0xc0016bc2c0) Reply frame received for 5 I0211 11:23:40.698893 9 log.go:172] (0xc0016bc2c0) Data frame received for 3 I0211 11:23:40.699113 9 log.go:172] (0xc000a1c820) (3) Data frame handling I0211 11:23:40.699143 9 log.go:172] (0xc000a1c820) (3) Data frame sent I0211 11:23:41.032453 9 log.go:172] (0xc0016bc2c0) Data frame received for 1 I0211 11:23:41.032653 9 log.go:172] (0xc0016bc2c0) (0xc000a1c820) Stream removed, broadcasting: 3 I0211 11:23:41.032746 9 log.go:172] (0xc0015310e0) (1) Data frame handling I0211 11:23:41.032784 9 log.go:172] (0xc0015310e0) (1) Data frame sent I0211 11:23:41.033251 9 log.go:172] (0xc0016bc2c0) (0xc0015310e0) Stream removed, broadcasting: 1 I0211 11:23:41.033865 9 log.go:172] (0xc0016bc2c0) (0xc000a1c8c0) Stream removed, broadcasting: 5 I0211 11:23:41.033994 9 log.go:172] (0xc0016bc2c0) Go away received I0211 11:23:41.034596 9 log.go:172] (0xc0016bc2c0) (0xc0015310e0) Stream removed, broadcasting: 1 I0211 11:23:41.034651 9 log.go:172] (0xc0016bc2c0) (0xc000a1c820) Stream removed, broadcasting: 3 I0211 11:23:41.034674 9 log.go:172] (0xc0016bc2c0) (0xc000a1c8c0) Stream removed, broadcasting: 5 Feb 11 11:23:41.034: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:23:41.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-fq7kx" for this suite. 
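The I0211 ... log.go lines above are the multiplexed streams of the exec transport; the payload being exec'd in host-test-container-pod is just the one-liner below, with the netserver pod IP and port from this run. A non-empty hostname reply is what "Found all expected endpoints: [netserver-0]" reflects.

    # send the literal string "hostName" to the netserver pod over UDP and drop blank lines from the reply
    echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'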
Feb 11 11:24:07.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:24:07.267: INFO: namespace: e2e-tests-pod-network-test-fq7kx, resource: bindings, ignored listing per whitelist Feb 11 11:24:07.446: INFO: namespace e2e-tests-pod-network-test-fq7kx deletion completed in 26.377577546s • [SLOW TEST:62.861 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:24:07.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Feb 11 11:24:07.681: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-28n2m" to be "success or failure" Feb 11 11:24:07.693: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.156663ms Feb 11 11:24:09.760: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079006036s Feb 11 11:24:11.776: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094527123s Feb 11 11:24:13.804: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122789426s Feb 11 11:24:16.755: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.074140537s Feb 11 11:24:18.825: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.143610623s Feb 11 11:24:20.846: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.164508401s STEP: Saw pod success Feb 11 11:24:20.846: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 11 11:24:20.851: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 11 11:24:21.343: INFO: Waiting for pod pod-host-path-test to disappear Feb 11 11:24:21.353: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:24:21.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-28n2m" for this suite. 
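The "success or failure" wait and log fetch in the hostPath test above correspond roughly to the commands below; namespace, pod, and container names are the ones from this run, and the actual mode verification is done by the framework against the fetched logs.

    # poll the test pod until it reports Succeeded
    kubectl -n e2e-tests-hostpath-28n2m get pod pod-host-path-test -o jsonpath='{.status.phase}'
    # fetch the container output the framework inspects for the expected volume mode
    kubectl -n e2e-tests-hostpath-28n2m logs pod-host-path-test -c test-container-1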
Feb 11 11:24:27.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:24:27.627: INFO: namespace: e2e-tests-hostpath-28n2m, resource: bindings, ignored listing per whitelist Feb 11 11:24:27.672: INFO: namespace e2e-tests-hostpath-28n2m deletion completed in 6.311248857s • [SLOW TEST:20.226 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:24:27.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-1094a68b-4cc1-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:24:27.900: INFO: Waiting up to 5m0s for pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-qctfb" to be "success or failure" Feb 11 11:24:27.921: INFO: Pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.314166ms Feb 11 11:24:30.061: INFO: Pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161554905s Feb 11 11:24:32.077: INFO: Pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176908961s Feb 11 11:24:34.132: INFO: Pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232178299s Feb 11 11:24:36.692: INFO: Pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.792270893s Feb 11 11:24:38.709: INFO: Pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.808836615s STEP: Saw pod success Feb 11 11:24:38.709: INFO: Pod "pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:24:38.715: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 11 11:24:38.880: INFO: Waiting for pod pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:24:38.956: INFO: Pod pod-configmaps-109596ea-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:24:38.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qctfb" for this suite. Feb 11 11:24:45.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:24:45.197: INFO: namespace: e2e-tests-configmap-qctfb, resource: bindings, ignored listing per whitelist Feb 11 11:24:45.230: INFO: namespace e2e-tests-configmap-qctfb deletion completed in 6.265684735s • [SLOW TEST:17.557 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:24:45.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 11 11:24:45.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 11 11:24:45.637: INFO: stderr: "" Feb 11 11:24:45.637: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:24:45.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l4qqc" for this suite. 
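The cluster-info validation above comes down to checking that the control-plane entries appear in the command's output; a minimal re-run of the same check is sketched below (the escape sequences in the logged stdout are just kubectl's terminal colouring).

    # the master and KubeDNS entries the test looks for
    kubectl --kubeconfig=/root/.kube/config cluster-info | grep 'Kubernetes master'
    kubectl --kubeconfig=/root/.kube/config cluster-info | grep 'KubeDNS'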
Feb 11 11:24:51.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:24:51.866: INFO: namespace: e2e-tests-kubectl-l4qqc, resource: bindings, ignored listing per whitelist Feb 11 11:24:51.987: INFO: namespace e2e-tests-kubectl-l4qqc deletion completed in 6.338498748s • [SLOW TEST:6.757 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:24:51.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 11:24:52.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-chpn9" to be "success or failure" Feb 11 11:24:52.142: INFO: Pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.861778ms Feb 11 11:24:54.159: INFO: Pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034129126s Feb 11 11:24:56.252: INFO: Pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126973721s Feb 11 11:24:58.269: INFO: Pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144657722s Feb 11 11:25:00.403: INFO: Pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278326201s Feb 11 11:25:02.420: INFO: Pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.29504425s STEP: Saw pod success Feb 11 11:25:02.420: INFO: Pod "downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:25:02.429: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 11:25:02.685: INFO: Waiting for pod downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:25:02.693: INFO: Pod downwardapi-volume-1f069f9a-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:25:02.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-chpn9" for this suite. Feb 11 11:25:08.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:25:08.980: INFO: namespace: e2e-tests-projected-chpn9, resource: bindings, ignored listing per whitelist Feb 11 11:25:09.028: INFO: namespace e2e-tests-projected-chpn9 deletion completed in 6.328911867s • [SLOW TEST:17.040 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:25:09.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 11:25:09.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-c7xlg" to be "success or failure" Feb 11 11:25:09.287: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.488687ms Feb 11 11:25:11.508: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243705438s Feb 11 11:25:13.527: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263092799s Feb 11 11:25:15.540: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.27632999s Feb 11 11:25:17.578: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31369226s Feb 11 11:25:19.883: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618875337s Feb 11 11:25:21.899: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.63499435s STEP: Saw pod success Feb 11 11:25:21.899: INFO: Pod "downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:25:21.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 11:25:22.011: INFO: Waiting for pod downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:25:22.096: INFO: Pod downwardapi-volume-293087b9-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:25:22.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c7xlg" for this suite. Feb 11 11:25:28.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:25:28.256: INFO: namespace: e2e-tests-downward-api-c7xlg, resource: bindings, ignored listing per whitelist Feb 11 11:25:28.395: INFO: namespace e2e-tests-downward-api-c7xlg deletion completed in 6.286464617s • [SLOW TEST:19.367 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:25:28.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 11 11:25:39.493: INFO: Successfully updated pod "labelsupdate34de7be1-4cc1-11ea-a6e3-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:25:41.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5j5br" for this suite. 
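Behind "Successfully updated pod" above, the test patches the pod's labels and waits for the downward API volume file to reflect the change. The sketch below uses the pod and namespace from this run, but the label key/value and the in-container file path (/etc/labels) are illustrative assumptions rather than values taken from the log.

    # change a label on the running pod (key/value are illustrative)
    kubectl -n e2e-tests-downward-api-5j5br label pod labelsupdate34de7be1-4cc1-11ea-a6e3-0242ac110005 testkey=after --overwrite
    # read the downward API projection inside the pod; the path is an assumption about the volume mount
    kubectl -n e2e-tests-downward-api-5j5br exec labelsupdate34de7be1-4cc1-11ea-a6e3-0242ac110005 -- cat /etc/labels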
Feb 11 11:26:05.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:26:05.906: INFO: namespace: e2e-tests-downward-api-5j5br, resource: bindings, ignored listing per whitelist Feb 11 11:26:05.990: INFO: namespace e2e-tests-downward-api-5j5br deletion completed in 24.23857477s • [SLOW TEST:37.594 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:26:05.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0211 11:26:19.691358 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 11 11:26:19.691: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:26:19.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tkxdv" for this suite. 
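The garbage-collector scenario above hinges on ownerReferences: pods owned only by simpletest-rc-to-be-deleted go away with it, while pods that also name simpletest-rc-to-stay as an owner are expected to survive. A quick way to see which pods carry which owners (names from this run):

    # print each pod with the names of its owners
    kubectl -n e2e-tests-gc-tkxdv get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'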
Feb 11 11:26:44.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:26:44.111: INFO: namespace: e2e-tests-gc-tkxdv, resource: bindings, ignored listing per whitelist Feb 11 11:26:44.208: INFO: namespace e2e-tests-gc-tkxdv deletion completed in 22.849651035s • [SLOW TEST:38.217 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:26:44.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 11 11:26:45.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:26:47.017: INFO: stderr: "" Feb 11 11:26:47.018: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 11 11:26:47.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:26:47.781: INFO: stderr: "" Feb 11 11:26:47.782: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 11 11:26:52.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:26:52.982: INFO: stderr: "" Feb 11 11:26:52.983: INFO: stdout: "update-demo-nautilus-gz8r4 update-demo-nautilus-pw5lr " Feb 11 11:26:52.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gz8r4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:26:53.216: INFO: stderr: "" Feb 11 11:26:53.216: INFO: stdout: "" Feb 11 11:26:53.216: INFO: update-demo-nautilus-gz8r4 is created but not running Feb 11 11:26:58.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:26:58.352: INFO: stderr: "" Feb 11 11:26:58.352: INFO: stdout: "update-demo-nautilus-gz8r4 update-demo-nautilus-pw5lr " Feb 11 11:26:58.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gz8r4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:26:58.510: INFO: stderr: "" Feb 11 11:26:58.510: INFO: stdout: "" Feb 11 11:26:58.510: INFO: update-demo-nautilus-gz8r4 is created but not running Feb 11 11:27:03.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:03.634: INFO: stderr: "" Feb 11 11:27:03.634: INFO: stdout: "update-demo-nautilus-gz8r4 update-demo-nautilus-pw5lr " Feb 11 11:27:03.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gz8r4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:03.805: INFO: stderr: "" Feb 11 11:27:03.805: INFO: stdout: "" Feb 11 11:27:03.806: INFO: update-demo-nautilus-gz8r4 is created but not running Feb 11 11:27:08.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:09.020: INFO: stderr: "" Feb 11 11:27:09.021: INFO: stdout: "update-demo-nautilus-gz8r4 update-demo-nautilus-pw5lr " Feb 11 11:27:09.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gz8r4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:09.162: INFO: stderr: "" Feb 11 11:27:09.162: INFO: stdout: "true" Feb 11 11:27:09.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gz8r4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:09.323: INFO: stderr: "" Feb 11 11:27:09.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 11 11:27:09.323: INFO: validating pod update-demo-nautilus-gz8r4 Feb 11 11:27:09.338: INFO: got data: { "image": "nautilus.jpg" } Feb 11 11:27:09.338: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 11 11:27:09.338: INFO: update-demo-nautilus-gz8r4 is verified up and running Feb 11 11:27:09.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw5lr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:09.573: INFO: stderr: "" Feb 11 11:27:09.574: INFO: stdout: "true" Feb 11 11:27:09.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw5lr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:09.726: INFO: stderr: "" Feb 11 11:27:09.726: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 11 11:27:09.726: INFO: validating pod update-demo-nautilus-pw5lr Feb 11 11:27:09.738: INFO: got data: { "image": "nautilus.jpg" } Feb 11 11:27:09.739: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 11 11:27:09.739: INFO: update-demo-nautilus-pw5lr is verified up and running STEP: rolling-update to new replication controller Feb 11 11:27:09.742: INFO: scanned /root for discovery docs: Feb 11 11:27:09.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:45.660: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 11 11:27:45.662: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 11 11:27:45.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:45.977: INFO: stderr: "" Feb 11 11:27:45.977: INFO: stdout: "update-demo-kitten-l55rf update-demo-kitten-nqkxx " Feb 11 11:27:45.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l55rf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:46.137: INFO: stderr: "" Feb 11 11:27:46.137: INFO: stdout: "true" Feb 11 11:27:46.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l55rf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:46.278: INFO: stderr: "" Feb 11 11:27:46.278: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 11 11:27:46.278: INFO: validating pod update-demo-kitten-l55rf Feb 11 11:27:46.293: INFO: got data: { "image": "kitten.jpg" } Feb 11 11:27:46.294: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 11 11:27:46.294: INFO: update-demo-kitten-l55rf is verified up and running Feb 11 11:27:46.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nqkxx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:46.452: INFO: stderr: "" Feb 11 11:27:46.452: INFO: stdout: "true" Feb 11 11:27:46.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nqkxx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xwrsj' Feb 11 11:27:46.612: INFO: stderr: "" Feb 11 11:27:46.613: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 11 11:27:46.613: INFO: validating pod update-demo-kitten-nqkxx Feb 11 11:27:46.640: INFO: got data: { "image": "kitten.jpg" } Feb 11 11:27:46.640: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 11 11:27:46.640: INFO: update-demo-kitten-nqkxx is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:27:46.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xwrsj" for this suite. 
Feb 11 11:28:26.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:28:26.915: INFO: namespace: e2e-tests-kubectl-xwrsj, resource: bindings, ignored listing per whitelist Feb 11 11:28:26.969: INFO: namespace e2e-tests-kubectl-xwrsj deletion completed in 40.317208114s • [SLOW TEST:102.761 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:28:26.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 11 11:28:27.329: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 11 11:28:27.355: INFO: Waiting for terminating namespaces to be deleted... Feb 11 11:28:27.363: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 11 11:28:27.382: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:28:27.382: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 11 11:28:27.382: INFO: Container weave ready: true, restart count 0 Feb 11 11:28:27.382: INFO: Container weave-npc ready: true, restart count 0 Feb 11 11:28:27.382: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 11 11:28:27.382: INFO: Container coredns ready: true, restart count 0 Feb 11 11:28:27.382: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:28:27.382: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:28:27.382: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:28:27.382: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 11 11:28:27.382: INFO: Container coredns ready: true, restart count 0 Feb 11 11:28:27.382: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 11 11:28:27.382: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
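The FailedScheduling event quoted just below is what the test waits for after submitting a pod whose nodeSelector matches no node; the same event can be listed directly (namespace from this run):

    # list scheduling failures recorded in the test namespace
    kubectl -n e2e-tests-sched-pred-9wgvl get events --field-selector reason=FailedScheduling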
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f25592a7834572], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:28:28.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-9wgvl" for this suite. Feb 11 11:28:34.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:28:34.886: INFO: namespace: e2e-tests-sched-pred-9wgvl, resource: bindings, ignored listing per whitelist Feb 11 11:28:34.891: INFO: namespace e2e-tests-sched-pred-9wgvl deletion completed in 6.27603541s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.921 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:28:34.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-a3e99923-4cc1-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume secrets Feb 11 11:28:35.098: INFO: Waiting up to 5m0s for pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-cl8x9" to be "success or failure" Feb 11 11:28:35.151: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.918248ms Feb 11 11:28:39.335: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236768035s Feb 11 11:28:41.349: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25153056s Feb 11 11:28:43.606: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.508281716s Feb 11 11:28:45.630: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.532471454s Feb 11 11:28:47.703: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.604797825s Feb 11 11:28:49.717: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.61917329s STEP: Saw pod success Feb 11 11:28:49.717: INFO: Pod "pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:28:49.724: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 11 11:28:49.821: INFO: Waiting for pod pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:28:50.049: INFO: Pod pod-secrets-a3eb4db5-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:28:50.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-cl8x9" for this suite. Feb 11 11:28:56.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:28:56.523: INFO: namespace: e2e-tests-secrets-cl8x9, resource: bindings, ignored listing per whitelist Feb 11 11:28:56.587: INFO: namespace e2e-tests-secrets-cl8x9 deletion completed in 6.519710791s • [SLOW TEST:21.695 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:28:56.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Feb 11 11:28:56.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-jlhmd run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 11 11:29:08.148: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0211 11:29:06.649997 1012 log.go:172] (0xc00084e0b0) (0xc00072c960) Create stream\nI0211 11:29:06.650228 1012 log.go:172] (0xc00084e0b0) (0xc00072c960) Stream added, broadcasting: 1\nI0211 11:29:06.659494 1012 log.go:172] (0xc00084e0b0) Reply frame received for 1\nI0211 11:29:06.659525 1012 log.go:172] (0xc00084e0b0) (0xc00072ca00) Create stream\nI0211 11:29:06.659534 1012 log.go:172] (0xc00084e0b0) (0xc00072ca00) Stream added, broadcasting: 3\nI0211 11:29:06.661042 1012 log.go:172] (0xc00084e0b0) Reply frame received for 3\nI0211 11:29:06.661126 1012 log.go:172] (0xc00084e0b0) (0xc0007b4000) Create stream\nI0211 11:29:06.661172 1012 log.go:172] (0xc00084e0b0) (0xc0007b4000) Stream added, broadcasting: 5\nI0211 11:29:06.662537 1012 log.go:172] (0xc00084e0b0) Reply frame received for 5\nI0211 11:29:06.662629 1012 log.go:172] (0xc00084e0b0) (0xc0006d0000) Create stream\nI0211 11:29:06.662643 1012 log.go:172] (0xc00084e0b0) (0xc0006d0000) Stream added, broadcasting: 7\nI0211 11:29:06.663921 1012 log.go:172] (0xc00084e0b0) Reply frame received for 7\nI0211 11:29:06.664282 1012 log.go:172] (0xc00072ca00) (3) Writing data frame\nI0211 11:29:06.664470 1012 log.go:172] (0xc00072ca00) (3) Writing data frame\nI0211 11:29:06.675298 1012 log.go:172] (0xc00084e0b0) Data frame received for 5\nI0211 11:29:06.675321 1012 log.go:172] (0xc0007b4000) (5) Data frame handling\nI0211 11:29:06.675340 1012 log.go:172] (0xc0007b4000) (5) Data frame sent\nI0211 11:29:06.678705 1012 log.go:172] (0xc00084e0b0) Data frame received for 5\nI0211 11:29:06.678723 1012 log.go:172] (0xc0007b4000) (5) Data frame handling\nI0211 11:29:06.678730 1012 log.go:172] (0xc0007b4000) (5) Data frame sent\nI0211 11:29:08.050237 1012 log.go:172] (0xc00084e0b0) Data frame received for 1\nI0211 11:29:08.050391 1012 log.go:172] (0xc00084e0b0) (0xc00072ca00) Stream removed, broadcasting: 3\nI0211 11:29:08.050489 1012 log.go:172] (0xc00072c960) (1) Data frame handling\nI0211 11:29:08.050515 1012 log.go:172] (0xc00072c960) (1) Data frame sent\nI0211 11:29:08.050572 1012 log.go:172] (0xc00084e0b0) (0xc0007b4000) Stream removed, broadcasting: 5\nI0211 11:29:08.050597 1012 log.go:172] (0xc00084e0b0) (0xc00072c960) Stream removed, broadcasting: 1\nI0211 11:29:08.050664 1012 log.go:172] (0xc00084e0b0) (0xc0006d0000) Stream removed, broadcasting: 7\nI0211 11:29:08.050695 1012 log.go:172] (0xc00084e0b0) Go away received\nI0211 11:29:08.050802 1012 log.go:172] (0xc00084e0b0) (0xc00072c960) Stream removed, broadcasting: 1\nI0211 11:29:08.050820 1012 log.go:172] (0xc00084e0b0) (0xc00072ca00) Stream removed, broadcasting: 3\nI0211 11:29:08.050830 1012 log.go:172] (0xc00084e0b0) (0xc0007b4000) Stream removed, broadcasting: 5\nI0211 11:29:08.050842 1012 log.go:172] (0xc00084e0b0) (0xc0006d0000) Stream removed, broadcasting: 7\n" Feb 11 11:29:08.148: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:29:10.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jlhmd" for this suite. 
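The full kubectl invocation appears in the captured command above; stripped of the harness --kubeconfig and --namespace plumbing, and with quoting adjusted for an interactive shell, roughly the same behaviour can be reproduced like this (the job name and flags are the ones the test uses):

    kubectl run e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 \
      --rm=true --generator=job/v1 --restart=OnFailure \
      --attach=true --stdin \
      -- sh -c 'cat && echo stdin closed'
    # whatever is typed on stdin is echoed back (the test sent "abcd1234"),
    # followed by "stdin closed"; closing stdin ends the attach and --rm
    # deletes the job afterwards
    kubectl get job e2e-test-rm-busybox-job    # expected: NotFound

As the captured stderr notes, --generator=job/v1 was already deprecated in this release.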
Feb 11 11:29:16.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:29:16.383: INFO: namespace: e2e-tests-kubectl-jlhmd, resource: bindings, ignored listing per whitelist Feb 11 11:29:16.669: INFO: namespace e2e-tests-kubectl-jlhmd deletion completed in 6.453609616s • [SLOW TEST:20.081 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:29:16.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 11:29:17.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-6d7sw" to be "success or failure" Feb 11 11:29:17.176: INFO: Pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.08617ms Feb 11 11:29:19.557: INFO: Pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396429816s Feb 11 11:29:21.583: INFO: Pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422267103s Feb 11 11:29:23.602: INFO: Pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441600508s Feb 11 11:29:25.743: INFO: Pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.581873006s Feb 11 11:29:27.806: INFO: Pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.645617672s STEP: Saw pod success Feb 11 11:29:27.807: INFO: Pod "downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:29:27.814: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 11:29:28.094: INFO: Waiting for pod downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:29:28.115: INFO: Pod downwardapi-volume-bcfc1c6c-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:29:28.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6d7sw" for this suite. Feb 11 11:29:34.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:29:34.304: INFO: namespace: e2e-tests-projected-6d7sw, resource: bindings, ignored listing per whitelist Feb 11 11:29:34.357: INFO: namespace e2e-tests-projected-6d7sw deletion completed in 6.191628267s • [SLOW TEST:17.687 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:29:34.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-zx2xr/configmap-test-c7689a34-4cc1-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:29:34.674: INFO: Waiting up to 5m0s for pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-zx2xr" to be "success or failure" Feb 11 11:29:34.689: INFO: Pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.665874ms Feb 11 11:29:36.742: INFO: Pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067898041s Feb 11 11:29:38.769: INFO: Pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094253731s Feb 11 11:29:40.781: INFO: Pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106887702s Feb 11 11:29:42.958: INFO: Pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.283439367s Feb 11 11:29:44.968: INFO: Pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.293591027s STEP: Saw pod success Feb 11 11:29:44.968: INFO: Pod "pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:29:44.973: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005 container env-test: STEP: delete the pod Feb 11 11:29:45.047: INFO: Waiting for pod pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:29:45.097: INFO: Pod pod-configmaps-c769f939-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:29:45.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zx2xr" for this suite. Feb 11 11:29:51.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:29:51.273: INFO: namespace: e2e-tests-configmap-zx2xr, resource: bindings, ignored listing per whitelist Feb 11 11:29:51.361: INFO: namespace e2e-tests-configmap-zx2xr deletion completed in 6.248389636s • [SLOW TEST:17.003 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:29:51.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 11 11:29:51.571: INFO: Waiting up to 5m0s for pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-containers-kgsrt" to be "success or failure" Feb 11 11:29:51.593: INFO: Pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.662873ms Feb 11 11:29:53.631: INFO: Pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059525927s Feb 11 11:29:55.661: INFO: Pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089107776s Feb 11 11:29:57.675: INFO: Pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103163855s Feb 11 11:29:59.710: INFO: Pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.138531929s Feb 11 11:30:01.724: INFO: Pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152483673s STEP: Saw pod success Feb 11 11:30:01.724: INFO: Pod "client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:30:01.729: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:30:02.178: INFO: Waiting for pod client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:30:02.208: INFO: Pod client-containers-d18102d9-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:30:02.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-kgsrt" for this suite. Feb 11 11:30:08.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:30:08.609: INFO: namespace: e2e-tests-containers-kgsrt, resource: bindings, ignored listing per whitelist Feb 11 11:30:08.646: INFO: namespace e2e-tests-containers-kgsrt deletion completed in 6.40701283s • [SLOW TEST:17.285 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:30:08.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 11 11:30:08.901: INFO: Waiting up to 5m0s for pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-999fq" to be "success or failure" Feb 11 11:30:09.000: INFO: Pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 98.177548ms Feb 11 11:30:11.015: INFO: Pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113601015s Feb 11 11:30:13.036: INFO: Pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134475615s Feb 11 11:30:15.793: INFO: Pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.891488261s Feb 11 11:30:17.835: INFO: Pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.933864136s Feb 11 11:30:19.892: INFO: Pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.990189373s STEP: Saw pod success Feb 11 11:30:19.892: INFO: Pod "pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:30:19.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:30:20.155: INFO: Waiting for pod pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005 to disappear Feb 11 11:30:20.169: INFO: Pod pod-dbd66d1e-4cc1-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:30:20.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-999fq" for this suite. Feb 11 11:30:26.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:30:26.448: INFO: namespace: e2e-tests-emptydir-999fq, resource: bindings, ignored listing per whitelist Feb 11 11:30:26.604: INFO: namespace e2e-tests-emptydir-999fq deletion completed in 6.422856691s • [SLOW TEST:17.957 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:30:26.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 11:30:26.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-7rtlb' Feb 11 11:30:29.184: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 11 11:30:29.185: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Feb 11 11:30:29.231: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-c75bv] Feb 11 11:30:29.231: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-c75bv" in namespace "e2e-tests-kubectl-7rtlb" to be "running and ready" Feb 11 11:30:29.381: INFO: Pod "e2e-test-nginx-rc-c75bv": Phase="Pending", Reason="", readiness=false. Elapsed: 149.773572ms Feb 11 11:30:31.396: INFO: Pod "e2e-test-nginx-rc-c75bv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165000623s Feb 11 11:30:33.418: INFO: Pod "e2e-test-nginx-rc-c75bv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186941576s Feb 11 11:30:35.450: INFO: Pod "e2e-test-nginx-rc-c75bv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218352977s Feb 11 11:30:37.476: INFO: Pod "e2e-test-nginx-rc-c75bv": Phase="Running", Reason="", readiness=true. Elapsed: 8.244889072s Feb 11 11:30:37.476: INFO: Pod "e2e-test-nginx-rc-c75bv" satisfied condition "running and ready" Feb 11 11:30:37.476: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-c75bv] Feb 11 11:30:37.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7rtlb' Feb 11 11:30:37.776: INFO: stderr: "" Feb 11 11:30:37.776: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Feb 11 11:30:37.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7rtlb' Feb 11 11:30:37.925: INFO: stderr: "" Feb 11 11:30:37.926: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:30:37.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7rtlb" for this suite. 
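Outside the harness, the sequence this spec exercises is the plain replication-controller workflow of kubectl run; the commands below are taken from the log, minus the --kubeconfig and --namespace flags (the run= label is the one kubectl run applies by default):

    kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
    kubectl get rc e2e-test-nginx-rc              # the rc was created
    kubectl get pods -l run=e2e-test-nginx-rc     # ...and controls one pod
    kubectl logs rc/e2e-test-nginx-rc             # logs can be fetched through the rc
    kubectl delete rc e2e-test-nginx-rc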
Feb 11 11:31:02.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:31:02.150: INFO: namespace: e2e-tests-kubectl-7rtlb, resource: bindings, ignored listing per whitelist Feb 11 11:31:02.294: INFO: namespace e2e-tests-kubectl-7rtlb deletion completed in 24.252032704s • [SLOW TEST:35.688 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:31:02.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 11 11:31:09.922: INFO: 10 pods remaining Feb 11 11:31:09.923: INFO: 6 pods has nil DeletionTimestamp Feb 11 11:31:09.923: INFO: STEP: Gathering metrics W0211 11:31:10.897316 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 11 11:31:10.897: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:31:10.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-nc6nt" for this suite. 
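The log does not show the exact DeleteOptions the test sends, but the behaviour it verifies (the rc kept around until every pod it owns is gone) is what the API calls foreground cascading deletion. A hedged sketch of triggering that by hand through the REST API, with a placeholder rc name and namespace:

    kubectl proxy --port=8001 &
    curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
    # the rc picks up a foregroundDeletion finalizer and remains listable
    # until the garbage collector has deleted all of its pods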
Feb 11 11:31:26.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:31:27.216: INFO: namespace: e2e-tests-gc-nc6nt, resource: bindings, ignored listing per whitelist Feb 11 11:31:27.275: INFO: namespace e2e-tests-gc-nc6nt deletion completed in 16.373405107s • [SLOW TEST:24.981 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:31:27.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0abd5e0c-4cc2-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume secrets Feb 11 11:31:27.693: INFO: Waiting up to 5m0s for pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-r64g5" to be "success or failure" Feb 11 11:31:27.710: INFO: Pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.288488ms Feb 11 11:31:30.177: INFO: Pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483313084s Feb 11 11:31:32.194: INFO: Pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500136373s Feb 11 11:31:34.778: INFO: Pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084816496s Feb 11 11:31:36.806: INFO: Pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.112514235s Feb 11 11:31:38.822: INFO: Pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.12814807s STEP: Saw pod success Feb 11 11:31:38.822: INFO: Pod "pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:31:38.847: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005 container secret-env-test: STEP: delete the pod Feb 11 11:31:38.968: INFO: Waiting for pod pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005 to disappear Feb 11 11:31:39.036: INFO: Pod pod-secrets-0acb0893-4cc2-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:31:39.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-r64g5" for this suite. 
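What this spec builds is simply a Secret whose key is injected into a container through env.valueFrom.secretKeyRef. A minimal equivalent, with illustrative names and key/value (only the container name secret-env-test comes from the log):

    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-test
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-test
              key: data-1

The test treats the pod like a short job: it waits for Succeeded and then reads the container log to confirm the value arrived.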
Feb 11 11:31:45.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:31:45.256: INFO: namespace: e2e-tests-secrets-r64g5, resource: bindings, ignored listing per whitelist Feb 11 11:31:45.369: INFO: namespace e2e-tests-secrets-r64g5 deletion completed in 6.318314215s • [SLOW TEST:18.094 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:31:45.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 11 11:31:45.707: INFO: Waiting up to 5m0s for pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-9bvcm" to be "success or failure" Feb 11 11:31:45.768: INFO: Pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.493609ms Feb 11 11:31:47.782: INFO: Pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075612524s Feb 11 11:31:49.889: INFO: Pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182480894s Feb 11 11:31:51.909: INFO: Pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20178902s Feb 11 11:31:54.209: INFO: Pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502234305s Feb 11 11:31:56.284: INFO: Pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.576979661s STEP: Saw pod success Feb 11 11:31:56.284: INFO: Pod "downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:31:56.292: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005 container dapi-container: STEP: delete the pod Feb 11 11:31:56.448: INFO: Waiting for pod downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005 to disappear Feb 11 11:31:56.633: INFO: Pod downward-api-158a195c-4cc2-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:31:56.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9bvcm" for this suite. 
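The downward API exposes status.hostIP as an environment variable; a sketch of the kind of pod this spec creates, with an illustrative pod name and output format (the container name dapi-container is from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-host-ip
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP

The assertion is just that the logged HOST_IP matches the IP of the node the pod landed on (hunter-server-hu5at5svl7ps here).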
Feb 11 11:32:02.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:32:02.792: INFO: namespace: e2e-tests-downward-api-9bvcm, resource: bindings, ignored listing per whitelist Feb 11 11:32:02.831: INFO: namespace e2e-tests-downward-api-9bvcm deletion completed in 6.167260396s • [SLOW TEST:17.462 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:32:02.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:32:11.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-2spwv" for this suite. 
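This spec logs very little because the whole check lives in the pod spec: a busybox container started with a read-only root filesystem and a command that tries to write to /. A hedged approximation, with illustrative names and command:

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-fs
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo test > /file; sleep 240"]
        securityContext:
          readOnlyRootFilesystem: true

The write fails with a read-only file system error, which is what "should not write to root filesystem" asserts.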
Feb 11 11:33:07.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:33:07.218: INFO: namespace: e2e-tests-kubelet-test-2spwv, resource: bindings, ignored listing per whitelist Feb 11 11:33:07.331: INFO: namespace e2e-tests-kubelet-test-2spwv deletion completed in 56.232903648s • [SLOW TEST:64.499 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:33:07.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 11 11:33:18.479: INFO: Successfully updated pod "annotationupdate4662c5e8-4cc2-11ea-a6e3-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:33:20.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mkm2f" for this suite. 
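The projected volume in this spec surfaces metadata.annotations as a file, so annotating the running pod is enough to change what the container reads. A sketch under those assumptions (all names, the starting annotation, and the mount path are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate
      annotations:
        builder: alice
    spec:
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations

Updating the annotation on the live pod then changes the file content, which is the modification the test waits for:

    kubectl annotate pod annotationupdate builder=bob --overwrite
    # within the kubelet sync period /etc/podinfo/annotations is rewritten
    # with the new value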
Feb 11 11:33:44.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:33:45.117: INFO: namespace: e2e-tests-projected-mkm2f, resource: bindings, ignored listing per whitelist Feb 11 11:33:45.234: INFO: namespace e2e-tests-projected-mkm2f deletion completed in 24.472911165s • [SLOW TEST:37.902 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:33:45.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 11 11:34:11.756: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:11.757: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:11.884172 9 log.go:172] (0xc00092fd90) (0xc0012b6780) Create stream I0211 11:34:11.884586 9 log.go:172] (0xc00092fd90) (0xc0012b6780) Stream added, broadcasting: 1 I0211 11:34:11.892675 9 log.go:172] (0xc00092fd90) Reply frame received for 1 I0211 11:34:11.892718 9 log.go:172] (0xc00092fd90) (0xc0012b68c0) Create stream I0211 11:34:11.892728 9 log.go:172] (0xc00092fd90) (0xc0012b68c0) Stream added, broadcasting: 3 I0211 11:34:11.893788 9 log.go:172] (0xc00092fd90) Reply frame received for 3 I0211 11:34:11.893827 9 log.go:172] (0xc00092fd90) (0xc00044d5e0) Create stream I0211 11:34:11.893840 9 log.go:172] (0xc00092fd90) (0xc00044d5e0) Stream added, broadcasting: 5 I0211 11:34:11.896800 9 log.go:172] (0xc00092fd90) Reply frame received for 5 I0211 11:34:12.079119 9 log.go:172] (0xc00092fd90) Data frame received for 3 I0211 11:34:12.079302 9 log.go:172] (0xc0012b68c0) (3) Data frame handling I0211 11:34:12.079359 9 log.go:172] (0xc0012b68c0) (3) Data frame sent I0211 11:34:12.276690 9 log.go:172] (0xc00092fd90) Data frame received for 1 I0211 11:34:12.276859 9 log.go:172] (0xc00092fd90) (0xc00044d5e0) Stream removed, broadcasting: 5 I0211 11:34:12.276927 9 log.go:172] (0xc0012b6780) (1) Data frame handling I0211 11:34:12.276983 9 log.go:172] (0xc00092fd90) (0xc0012b68c0) Stream removed, broadcasting: 3 I0211 11:34:12.277029 9 log.go:172] (0xc0012b6780) (1) Data frame sent I0211 11:34:12.277053 9 log.go:172] (0xc00092fd90) (0xc0012b6780) Stream removed, broadcasting: 1 I0211 11:34:12.277078 9 log.go:172] 
(0xc00092fd90) Go away received I0211 11:34:12.277455 9 log.go:172] (0xc00092fd90) (0xc0012b6780) Stream removed, broadcasting: 1 I0211 11:34:12.277481 9 log.go:172] (0xc00092fd90) (0xc0012b68c0) Stream removed, broadcasting: 3 I0211 11:34:12.277497 9 log.go:172] (0xc00092fd90) (0xc00044d5e0) Stream removed, broadcasting: 5 Feb 11 11:34:12.277: INFO: Exec stderr: "" Feb 11 11:34:12.277: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:12.277: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:12.389234 9 log.go:172] (0xc0024302c0) (0xc001cba000) Create stream I0211 11:34:12.389588 9 log.go:172] (0xc0024302c0) (0xc001cba000) Stream added, broadcasting: 1 I0211 11:34:12.399162 9 log.go:172] (0xc0024302c0) Reply frame received for 1 I0211 11:34:12.399203 9 log.go:172] (0xc0024302c0) (0xc000233f40) Create stream I0211 11:34:12.399212 9 log.go:172] (0xc0024302c0) (0xc000233f40) Stream added, broadcasting: 3 I0211 11:34:12.400734 9 log.go:172] (0xc0024302c0) Reply frame received for 3 I0211 11:34:12.400890 9 log.go:172] (0xc0024302c0) (0xc001353180) Create stream I0211 11:34:12.400911 9 log.go:172] (0xc0024302c0) (0xc001353180) Stream added, broadcasting: 5 I0211 11:34:12.402244 9 log.go:172] (0xc0024302c0) Reply frame received for 5 I0211 11:34:12.616514 9 log.go:172] (0xc0024302c0) Data frame received for 3 I0211 11:34:12.616645 9 log.go:172] (0xc000233f40) (3) Data frame handling I0211 11:34:12.616672 9 log.go:172] (0xc000233f40) (3) Data frame sent I0211 11:34:12.743815 9 log.go:172] (0xc0024302c0) Data frame received for 1 I0211 11:34:12.743951 9 log.go:172] (0xc0024302c0) (0xc001353180) Stream removed, broadcasting: 5 I0211 11:34:12.744031 9 log.go:172] (0xc001cba000) (1) Data frame handling I0211 11:34:12.744066 9 log.go:172] (0xc001cba000) (1) Data frame sent I0211 11:34:12.744081 9 log.go:172] (0xc0024302c0) (0xc000233f40) Stream removed, broadcasting: 3 I0211 11:34:12.744137 9 log.go:172] (0xc0024302c0) (0xc001cba000) Stream removed, broadcasting: 1 I0211 11:34:12.744155 9 log.go:172] (0xc0024302c0) Go away received I0211 11:34:12.744486 9 log.go:172] (0xc0024302c0) (0xc001cba000) Stream removed, broadcasting: 1 I0211 11:34:12.744500 9 log.go:172] (0xc0024302c0) (0xc000233f40) Stream removed, broadcasting: 3 I0211 11:34:12.744504 9 log.go:172] (0xc0024302c0) (0xc001353180) Stream removed, broadcasting: 5 Feb 11 11:34:12.744: INFO: Exec stderr: "" Feb 11 11:34:12.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:12.744: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:12.824042 9 log.go:172] (0xc0025782c0) (0xc0013534a0) Create stream I0211 11:34:12.824237 9 log.go:172] (0xc0025782c0) (0xc0013534a0) Stream added, broadcasting: 1 I0211 11:34:12.831650 9 log.go:172] (0xc0025782c0) Reply frame received for 1 I0211 11:34:12.831749 9 log.go:172] (0xc0025782c0) (0xc001cba140) Create stream I0211 11:34:12.831768 9 log.go:172] (0xc0025782c0) (0xc001cba140) Stream added, broadcasting: 3 I0211 11:34:12.835061 9 log.go:172] (0xc0025782c0) Reply frame received for 3 I0211 11:34:12.835131 9 log.go:172] (0xc0025782c0) (0xc0010bbd60) Create stream I0211 11:34:12.835163 9 log.go:172] (0xc0025782c0) (0xc0010bbd60) Stream added, broadcasting: 5 
I0211 11:34:12.837632 9 log.go:172] (0xc0025782c0) Reply frame received for 5 I0211 11:34:12.968057 9 log.go:172] (0xc0025782c0) Data frame received for 3 I0211 11:34:12.968204 9 log.go:172] (0xc001cba140) (3) Data frame handling I0211 11:34:12.968245 9 log.go:172] (0xc001cba140) (3) Data frame sent I0211 11:34:13.100446 9 log.go:172] (0xc0025782c0) Data frame received for 1 I0211 11:34:13.100771 9 log.go:172] (0xc0025782c0) (0xc0010bbd60) Stream removed, broadcasting: 5 I0211 11:34:13.100918 9 log.go:172] (0xc0013534a0) (1) Data frame handling I0211 11:34:13.101083 9 log.go:172] (0xc0013534a0) (1) Data frame sent I0211 11:34:13.101141 9 log.go:172] (0xc0025782c0) (0xc001cba140) Stream removed, broadcasting: 3 I0211 11:34:13.101226 9 log.go:172] (0xc0025782c0) (0xc0013534a0) Stream removed, broadcasting: 1 I0211 11:34:13.101272 9 log.go:172] (0xc0025782c0) Go away received I0211 11:34:13.101673 9 log.go:172] (0xc0025782c0) (0xc0013534a0) Stream removed, broadcasting: 1 I0211 11:34:13.101751 9 log.go:172] (0xc0025782c0) (0xc001cba140) Stream removed, broadcasting: 3 I0211 11:34:13.101766 9 log.go:172] (0xc0025782c0) (0xc0010bbd60) Stream removed, broadcasting: 5 Feb 11 11:34:13.101: INFO: Exec stderr: "" Feb 11 11:34:13.101: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:13.102: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:13.170361 9 log.go:172] (0xc002578790) (0xc001353720) Create stream I0211 11:34:13.170599 9 log.go:172] (0xc002578790) (0xc001353720) Stream added, broadcasting: 1 I0211 11:34:13.174782 9 log.go:172] (0xc002578790) Reply frame received for 1 I0211 11:34:13.174882 9 log.go:172] (0xc002578790) (0xc0010b0dc0) Create stream I0211 11:34:13.174903 9 log.go:172] (0xc002578790) (0xc0010b0dc0) Stream added, broadcasting: 3 I0211 11:34:13.177671 9 log.go:172] (0xc002578790) Reply frame received for 3 I0211 11:34:13.177702 9 log.go:172] (0xc002578790) (0xc001cba280) Create stream I0211 11:34:13.177712 9 log.go:172] (0xc002578790) (0xc001cba280) Stream added, broadcasting: 5 I0211 11:34:13.187002 9 log.go:172] (0xc002578790) Reply frame received for 5 I0211 11:34:13.297286 9 log.go:172] (0xc002578790) Data frame received for 3 I0211 11:34:13.297978 9 log.go:172] (0xc0010b0dc0) (3) Data frame handling I0211 11:34:13.298158 9 log.go:172] (0xc0010b0dc0) (3) Data frame sent I0211 11:34:13.415430 9 log.go:172] (0xc002578790) (0xc0010b0dc0) Stream removed, broadcasting: 3 I0211 11:34:13.415733 9 log.go:172] (0xc002578790) Data frame received for 1 I0211 11:34:13.415852 9 log.go:172] (0xc002578790) (0xc001cba280) Stream removed, broadcasting: 5 I0211 11:34:13.416053 9 log.go:172] (0xc001353720) (1) Data frame handling I0211 11:34:13.416118 9 log.go:172] (0xc001353720) (1) Data frame sent I0211 11:34:13.416197 9 log.go:172] (0xc002578790) (0xc001353720) Stream removed, broadcasting: 1 I0211 11:34:13.416229 9 log.go:172] (0xc002578790) Go away received I0211 11:34:13.417307 9 log.go:172] (0xc002578790) (0xc001353720) Stream removed, broadcasting: 1 I0211 11:34:13.417350 9 log.go:172] (0xc002578790) (0xc0010b0dc0) Stream removed, broadcasting: 3 I0211 11:34:13.417367 9 log.go:172] (0xc002578790) (0xc001cba280) Stream removed, broadcasting: 5 Feb 11 11:34:13.417: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 11 
11:34:13.417: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:13.418: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:13.502067 9 log.go:172] (0xc0024c42c0) (0xc001e021e0) Create stream I0211 11:34:13.502213 9 log.go:172] (0xc0024c42c0) (0xc001e021e0) Stream added, broadcasting: 1 I0211 11:34:13.508470 9 log.go:172] (0xc0024c42c0) Reply frame received for 1 I0211 11:34:13.508530 9 log.go:172] (0xc0024c42c0) (0xc001e02320) Create stream I0211 11:34:13.508545 9 log.go:172] (0xc0024c42c0) (0xc001e02320) Stream added, broadcasting: 3 I0211 11:34:13.509428 9 log.go:172] (0xc0024c42c0) Reply frame received for 3 I0211 11:34:13.509462 9 log.go:172] (0xc0024c42c0) (0xc0010b0e60) Create stream I0211 11:34:13.509473 9 log.go:172] (0xc0024c42c0) (0xc0010b0e60) Stream added, broadcasting: 5 I0211 11:34:13.510479 9 log.go:172] (0xc0024c42c0) Reply frame received for 5 I0211 11:34:13.640875 9 log.go:172] (0xc0024c42c0) Data frame received for 3 I0211 11:34:13.641007 9 log.go:172] (0xc001e02320) (3) Data frame handling I0211 11:34:13.641061 9 log.go:172] (0xc001e02320) (3) Data frame sent I0211 11:34:13.919504 9 log.go:172] (0xc0024c42c0) Data frame received for 1 I0211 11:34:13.919617 9 log.go:172] (0xc001e021e0) (1) Data frame handling I0211 11:34:13.919650 9 log.go:172] (0xc001e021e0) (1) Data frame sent I0211 11:34:13.919666 9 log.go:172] (0xc0024c42c0) (0xc001e021e0) Stream removed, broadcasting: 1 I0211 11:34:13.919858 9 log.go:172] (0xc0024c42c0) (0xc001e02320) Stream removed, broadcasting: 3 I0211 11:34:13.919919 9 log.go:172] (0xc0024c42c0) (0xc0010b0e60) Stream removed, broadcasting: 5 I0211 11:34:13.919979 9 log.go:172] (0xc0024c42c0) (0xc001e021e0) Stream removed, broadcasting: 1 I0211 11:34:13.920017 9 log.go:172] (0xc0024c42c0) (0xc001e02320) Stream removed, broadcasting: 3 I0211 11:34:13.920034 9 log.go:172] (0xc0024c42c0) (0xc0010b0e60) Stream removed, broadcasting: 5 I0211 11:34:13.920333 9 log.go:172] (0xc0024c42c0) Go away received Feb 11 11:34:13.920: INFO: Exec stderr: "" Feb 11 11:34:13.920: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:13.920: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:14.000888 9 log.go:172] (0xc0018d02c0) (0xc0010bbf40) Create stream I0211 11:34:14.001219 9 log.go:172] (0xc0018d02c0) (0xc0010bbf40) Stream added, broadcasting: 1 I0211 11:34:14.005893 9 log.go:172] (0xc0018d02c0) Reply frame received for 1 I0211 11:34:14.005968 9 log.go:172] (0xc0018d02c0) (0xc001e023c0) Create stream I0211 11:34:14.005984 9 log.go:172] (0xc0018d02c0) (0xc001e023c0) Stream added, broadcasting: 3 I0211 11:34:14.008523 9 log.go:172] (0xc0018d02c0) Reply frame received for 3 I0211 11:34:14.008574 9 log.go:172] (0xc0018d02c0) (0xc0013537c0) Create stream I0211 11:34:14.008591 9 log.go:172] (0xc0018d02c0) (0xc0013537c0) Stream added, broadcasting: 5 I0211 11:34:14.009639 9 log.go:172] (0xc0018d02c0) Reply frame received for 5 I0211 11:34:14.200875 9 log.go:172] (0xc0018d02c0) Data frame received for 3 I0211 11:34:14.201039 9 log.go:172] (0xc001e023c0) (3) Data frame handling I0211 11:34:14.201082 9 log.go:172] (0xc001e023c0) (3) Data frame sent I0211 11:34:14.302294 9 log.go:172] (0xc0018d02c0) (0xc001e023c0) Stream 
removed, broadcasting: 3 I0211 11:34:14.302529 9 log.go:172] (0xc0018d02c0) Data frame received for 1 I0211 11:34:14.302601 9 log.go:172] (0xc0018d02c0) (0xc0013537c0) Stream removed, broadcasting: 5 I0211 11:34:14.302656 9 log.go:172] (0xc0010bbf40) (1) Data frame handling I0211 11:34:14.302704 9 log.go:172] (0xc0010bbf40) (1) Data frame sent I0211 11:34:14.302736 9 log.go:172] (0xc0018d02c0) (0xc0010bbf40) Stream removed, broadcasting: 1 I0211 11:34:14.302847 9 log.go:172] (0xc0018d02c0) Go away received I0211 11:34:14.303566 9 log.go:172] (0xc0018d02c0) (0xc0010bbf40) Stream removed, broadcasting: 1 I0211 11:34:14.303662 9 log.go:172] (0xc0018d02c0) (0xc001e023c0) Stream removed, broadcasting: 3 I0211 11:34:14.303683 9 log.go:172] (0xc0018d02c0) (0xc0013537c0) Stream removed, broadcasting: 5 Feb 11 11:34:14.303: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 11 11:34:14.303: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:14.303: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:14.398120 9 log.go:172] (0xc002578c60) (0xc001353a40) Create stream I0211 11:34:14.398462 9 log.go:172] (0xc002578c60) (0xc001353a40) Stream added, broadcasting: 1 I0211 11:34:14.404528 9 log.go:172] (0xc002578c60) Reply frame received for 1 I0211 11:34:14.404578 9 log.go:172] (0xc002578c60) (0xc001b04000) Create stream I0211 11:34:14.404591 9 log.go:172] (0xc002578c60) (0xc001b04000) Stream added, broadcasting: 3 I0211 11:34:14.406256 9 log.go:172] (0xc002578c60) Reply frame received for 3 I0211 11:34:14.406302 9 log.go:172] (0xc002578c60) (0xc0010b0fa0) Create stream I0211 11:34:14.406317 9 log.go:172] (0xc002578c60) (0xc0010b0fa0) Stream added, broadcasting: 5 I0211 11:34:14.407328 9 log.go:172] (0xc002578c60) Reply frame received for 5 I0211 11:34:14.873267 9 log.go:172] (0xc002578c60) Data frame received for 3 I0211 11:34:14.873537 9 log.go:172] (0xc001b04000) (3) Data frame handling I0211 11:34:14.873598 9 log.go:172] (0xc001b04000) (3) Data frame sent I0211 11:34:15.055841 9 log.go:172] (0xc002578c60) Data frame received for 1 I0211 11:34:15.055999 9 log.go:172] (0xc002578c60) (0xc001b04000) Stream removed, broadcasting: 3 I0211 11:34:15.056060 9 log.go:172] (0xc001353a40) (1) Data frame handling I0211 11:34:15.056079 9 log.go:172] (0xc001353a40) (1) Data frame sent I0211 11:34:15.056119 9 log.go:172] (0xc002578c60) (0xc0010b0fa0) Stream removed, broadcasting: 5 I0211 11:34:15.056218 9 log.go:172] (0xc002578c60) (0xc001353a40) Stream removed, broadcasting: 1 I0211 11:34:15.056244 9 log.go:172] (0xc002578c60) Go away received I0211 11:34:15.056557 9 log.go:172] (0xc002578c60) (0xc001353a40) Stream removed, broadcasting: 1 I0211 11:34:15.056570 9 log.go:172] (0xc002578c60) (0xc001b04000) Stream removed, broadcasting: 3 I0211 11:34:15.056579 9 log.go:172] (0xc002578c60) (0xc0010b0fa0) Stream removed, broadcasting: 5 Feb 11 11:34:15.056: INFO: Exec stderr: "" Feb 11 11:34:15.056: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:15.056: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:15.159169 9 log.go:172] (0xc002579130) (0xc001353cc0) Create stream 
I0211 11:34:15.159380 9 log.go:172] (0xc002579130) (0xc001353cc0) Stream added, broadcasting: 1 I0211 11:34:15.187180 9 log.go:172] (0xc002579130) Reply frame received for 1 I0211 11:34:15.187375 9 log.go:172] (0xc002579130) (0xc00044c140) Create stream I0211 11:34:15.187398 9 log.go:172] (0xc002579130) (0xc00044c140) Stream added, broadcasting: 3 I0211 11:34:15.189190 9 log.go:172] (0xc002579130) Reply frame received for 3 I0211 11:34:15.189293 9 log.go:172] (0xc002579130) (0xc0010ba000) Create stream I0211 11:34:15.189309 9 log.go:172] (0xc002579130) (0xc0010ba000) Stream added, broadcasting: 5 I0211 11:34:15.190633 9 log.go:172] (0xc002579130) Reply frame received for 5 I0211 11:34:15.296532 9 log.go:172] (0xc002579130) Data frame received for 3 I0211 11:34:15.296616 9 log.go:172] (0xc00044c140) (3) Data frame handling I0211 11:34:15.296675 9 log.go:172] (0xc00044c140) (3) Data frame sent I0211 11:34:15.425378 9 log.go:172] (0xc002579130) Data frame received for 1 I0211 11:34:15.425535 9 log.go:172] (0xc001353cc0) (1) Data frame handling I0211 11:34:15.425575 9 log.go:172] (0xc001353cc0) (1) Data frame sent I0211 11:34:15.425656 9 log.go:172] (0xc002579130) (0xc001353cc0) Stream removed, broadcasting: 1 I0211 11:34:15.426698 9 log.go:172] (0xc002579130) (0xc00044c140) Stream removed, broadcasting: 3 I0211 11:34:15.426807 9 log.go:172] (0xc002579130) (0xc0010ba000) Stream removed, broadcasting: 5 I0211 11:34:15.426829 9 log.go:172] (0xc002579130) Go away received I0211 11:34:15.427526 9 log.go:172] (0xc002579130) (0xc001353cc0) Stream removed, broadcasting: 1 I0211 11:34:15.427671 9 log.go:172] (0xc002579130) (0xc00044c140) Stream removed, broadcasting: 3 I0211 11:34:15.427689 9 log.go:172] (0xc002579130) (0xc0010ba000) Stream removed, broadcasting: 5 Feb 11 11:34:15.427: INFO: Exec stderr: "" Feb 11 11:34:15.427: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:15.428: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:15.535700 9 log.go:172] (0xc00092fa20) (0xc000bc2280) Create stream I0211 11:34:15.535885 9 log.go:172] (0xc00092fa20) (0xc000bc2280) Stream added, broadcasting: 1 I0211 11:34:15.542098 9 log.go:172] (0xc00092fa20) Reply frame received for 1 I0211 11:34:15.542150 9 log.go:172] (0xc00092fa20) (0xc0010ba280) Create stream I0211 11:34:15.542165 9 log.go:172] (0xc00092fa20) (0xc0010ba280) Stream added, broadcasting: 3 I0211 11:34:15.543713 9 log.go:172] (0xc00092fa20) Reply frame received for 3 I0211 11:34:15.543735 9 log.go:172] (0xc00092fa20) (0xc0010ba460) Create stream I0211 11:34:15.543745 9 log.go:172] (0xc00092fa20) (0xc0010ba460) Stream added, broadcasting: 5 I0211 11:34:15.545493 9 log.go:172] (0xc00092fa20) Reply frame received for 5 I0211 11:34:15.650789 9 log.go:172] (0xc00092fa20) Data frame received for 3 I0211 11:34:15.651070 9 log.go:172] (0xc0010ba280) (3) Data frame handling I0211 11:34:15.651135 9 log.go:172] (0xc0010ba280) (3) Data frame sent I0211 11:34:15.779166 9 log.go:172] (0xc00092fa20) Data frame received for 1 I0211 11:34:15.779275 9 log.go:172] (0xc00092fa20) (0xc0010ba460) Stream removed, broadcasting: 5 I0211 11:34:15.779374 9 log.go:172] (0xc000bc2280) (1) Data frame handling I0211 11:34:15.779437 9 log.go:172] (0xc000bc2280) (1) Data frame sent I0211 11:34:15.779507 9 log.go:172] (0xc00092fa20) (0xc000bc2280) Stream removed, broadcasting: 1 I0211 
11:34:15.779569 9 log.go:172] (0xc00092fa20) (0xc0010ba280) Stream removed, broadcasting: 3 I0211 11:34:15.779627 9 log.go:172] (0xc00092fa20) Go away received I0211 11:34:15.779842 9 log.go:172] (0xc00092fa20) (0xc000bc2280) Stream removed, broadcasting: 1 I0211 11:34:15.779859 9 log.go:172] (0xc00092fa20) (0xc0010ba280) Stream removed, broadcasting: 3 I0211 11:34:15.779894 9 log.go:172] (0xc00092fa20) (0xc0010ba460) Stream removed, broadcasting: 5 Feb 11 11:34:15.779: INFO: Exec stderr: "" Feb 11 11:34:15.780: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w6pd9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:34:15.780: INFO: >>> kubeConfig: /root/.kube/config I0211 11:34:15.840406 9 log.go:172] (0xc0025782c0) (0xc00070e460) Create stream I0211 11:34:15.840578 9 log.go:172] (0xc0025782c0) (0xc00070e460) Stream added, broadcasting: 1 I0211 11:34:15.845484 9 log.go:172] (0xc0025782c0) Reply frame received for 1 I0211 11:34:15.845522 9 log.go:172] (0xc0025782c0) (0xc0010ba500) Create stream I0211 11:34:15.845536 9 log.go:172] (0xc0025782c0) (0xc0010ba500) Stream added, broadcasting: 3 I0211 11:34:15.846436 9 log.go:172] (0xc0025782c0) Reply frame received for 3 I0211 11:34:15.846470 9 log.go:172] (0xc0025782c0) (0xc0010ba640) Create stream I0211 11:34:15.846483 9 log.go:172] (0xc0025782c0) (0xc0010ba640) Stream added, broadcasting: 5 I0211 11:34:15.847987 9 log.go:172] (0xc0025782c0) Reply frame received for 5 I0211 11:34:15.965773 9 log.go:172] (0xc0025782c0) Data frame received for 3 I0211 11:34:15.965935 9 log.go:172] (0xc0010ba500) (3) Data frame handling I0211 11:34:15.965968 9 log.go:172] (0xc0010ba500) (3) Data frame sent I0211 11:34:16.096114 9 log.go:172] (0xc0025782c0) (0xc0010ba640) Stream removed, broadcasting: 5 I0211 11:34:16.096229 9 log.go:172] (0xc0025782c0) Data frame received for 1 I0211 11:34:16.096280 9 log.go:172] (0xc0025782c0) (0xc0010ba500) Stream removed, broadcasting: 3 I0211 11:34:16.096318 9 log.go:172] (0xc00070e460) (1) Data frame handling I0211 11:34:16.096336 9 log.go:172] (0xc00070e460) (1) Data frame sent I0211 11:34:16.096345 9 log.go:172] (0xc0025782c0) (0xc00070e460) Stream removed, broadcasting: 1 I0211 11:34:16.096357 9 log.go:172] (0xc0025782c0) Go away received I0211 11:34:16.096918 9 log.go:172] (0xc0025782c0) (0xc00070e460) Stream removed, broadcasting: 1 I0211 11:34:16.096939 9 log.go:172] (0xc0025782c0) (0xc0010ba500) Stream removed, broadcasting: 3 I0211 11:34:16.096955 9 log.go:172] (0xc0025782c0) (0xc0010ba640) Stream removed, broadcasting: 5 Feb 11 11:34:16.097: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:34:16.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-w6pd9" for this suite. 
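The spec above is the kubelet-managed /etc/hosts check: for a pod running with hostNetwork=true the kubelet must not rewrite /etc/hosts, so the test execs cat /etc/hosts and cat /etc/hosts-original in each busybox container and compares the results (the repeated "Create stream" / "Stream added, broadcasting: 1 ... 3 ... 5" records are the multiplexed channels of each SPDY exec session as logged by the e2e framework). A minimal manual spot-check with kubectl, assuming the pod and container names from the log are still present and that the kubelet writes its usual "# Kubernetes-managed hosts file" header into files it manages (treat that header as an assumption on other kubelet versions):

# hostNetwork=true pod: the first line should NOT be the kubelet marker
kubectl exec test-host-network-pod -c busybox-1 -- head -n 1 /etc/hosts
# a regular (non-hostNetwork) pod in the same namespace should show the marker header instead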
Feb 11 11:35:12.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:35:12.230: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-w6pd9, resource: bindings, ignored listing per whitelist Feb 11 11:35:12.543: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-w6pd9 deletion completed in 56.430683183s • [SLOW TEST:87.308 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:35:12.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 11 11:35:12.915: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 11 11:35:18.227: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 11 11:35:22.260: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 11 11:35:22.354: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-tlk97,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tlk97/deployments/test-cleanup-deployment,UID:96a19512-4cc2-11ea-a994-fa163e34d433,ResourceVersion:21302966,Generation:1,CreationTimestamp:2020-02-11 11:35:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 11 11:35:23.493: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Feb 11 11:35:23.494: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 11 11:35:23.496: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-tlk97,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tlk97/replicasets/test-cleanup-controller,UID:9108651e-4cc2-11ea-a994-fa163e34d433,ResourceVersion:21302968,Generation:1,CreationTimestamp:2020-02-11 11:35:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 96a19512-4cc2-11ea-a994-fa163e34d433 0xc001aaa9d7 0xc001aaa9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 11 11:35:23.675: INFO: Pod "test-cleanup-controller-kd9n5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-kd9n5,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-tlk97,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tlk97/pods/test-cleanup-controller-kd9n5,UID:910e6c60-4cc2-11ea-a994-fa163e34d433,ResourceVersion:21302963,Generation:0,CreationTimestamp:2020-02-11 11:35:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 9108651e-4cc2-11ea-a994-fa163e34d433 0xc0018b970f 0xc0018b9720}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7ndkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ndkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7ndkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018b9790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018b97b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:35:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:35:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:35:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:35:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-11 11:35:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 11:35:20 +0000 UTC,} nil} {nil nil nil} true 0 
nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://27fede2868c38af6917ebc5e655900f6abf9c67fecf867152cf2c69a15dec845}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:35:23.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-tlk97" for this suite. Feb 11 11:35:34.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:35:34.963: INFO: namespace: e2e-tests-deployment-tlk97, resource: bindings, ignored listing per whitelist Feb 11 11:35:34.987: INFO: namespace e2e-tests-deployment-tlk97 deletion completed in 11.29019778s • [SLOW TEST:22.443 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:35:34.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6fcng.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6fcng.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6fcng.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6fcng.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6fcng.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6fcng.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 11 11:35:51.695: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.703: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.707: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.712: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.717: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.723: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod 
e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.727: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6fcng.svc.cluster.local from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.731: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.736: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.740: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005: the server could not find the requested resource (get pods dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005) Feb 11 11:35:51.740: INFO: Lookups using e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6fcng.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 11 11:35:56.825: INFO: DNS probes using e2e-tests-dns-6fcng/dns-test-9e63133f-4cc2-11ea-a6e3-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:35:56.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-6fcng" for this suite. 
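The DNS spec above drives the whole check from inside two utility containers (wheezy and jessie): the shell loop resolves kubernetes.default, the .svc and .svc.cluster.local forms, a headless-service hostname and the pod's own A record over both UDP (+notcp) and TCP (+tcp), writing an OK marker file per successful lookup that the prober then reads back. The intermediate "Unable to read ..." messages are only the poller checking before the marker files exist; the run ends with the probes succeeding. A hedged one-shot version of the core lookup from a throwaway pod (the image name and tag here are illustrative; any image that ships dig will do):

kubectl run dns-check --rm -it --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7 -- \
  sh -c 'dig +notcp +noall +answer +search kubernetes.default A; \
         dig +tcp +noall +answer +search kubernetes.default A'
# both transports should return an A record for the API service;
# an empty answer from either is what the probe loop counts as a failure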
Feb 11 11:36:03.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:36:03.132: INFO: namespace: e2e-tests-dns-6fcng, resource: bindings, ignored listing per whitelist Feb 11 11:36:03.192: INFO: namespace e2e-tests-dns-6fcng deletion completed in 6.27174626s • [SLOW TEST:28.204 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:36:03.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 11 11:36:03.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-xzlzr" to be "success or failure" Feb 11 11:36:03.434: INFO: Pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.457755ms Feb 11 11:36:05.449: INFO: Pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032655572s Feb 11 11:36:07.474: INFO: Pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058223805s Feb 11 11:36:10.140: INFO: Pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723593278s Feb 11 11:36:12.167: INFO: Pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750589229s Feb 11 11:36:14.193: INFO: Pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.777255129s STEP: Saw pod success Feb 11 11:36:14.194: INFO: Pod "downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:36:14.201: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005 container client-container: STEP: delete the pod Feb 11 11:36:14.485: INFO: Waiting for pod downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005 to disappear Feb 11 11:36:14.511: INFO: Pod downwardapi-volume-af24db7c-4cc2-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:36:14.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xzlzr" for this suite. Feb 11 11:36:20.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:36:20.929: INFO: namespace: e2e-tests-projected-xzlzr, resource: bindings, ignored listing per whitelist Feb 11 11:36:21.001: INFO: namespace e2e-tests-projected-xzlzr deletion completed in 6.325943467s • [SLOW TEST:17.809 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:36:21.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 
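The Container Runtime spec above starts one container per restart policy (the terminate-cmd-rpa, -rpof and -rpn names appear to map to restartPolicy Always, OnFailure and Never in the upstream test) and asserts on four status fields: RestartCount, the pod Phase, the Ready condition and the container State. The same fields can be read back for any pod with kubectl's jsonpath output; the pod name below is purely illustrative:

kubectl get pod terminate-test -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-test -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-test -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'
kubectl get pod terminate-test -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'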
[AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:37:24.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-4n94z" for this suite. Feb 11 11:37:32.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:37:32.753: INFO: namespace: e2e-tests-container-runtime-4n94z, resource: bindings, ignored listing per whitelist Feb 11 11:37:32.784: INFO: namespace e2e-tests-container-runtime-4n94z deletion completed in 8.224752722s • [SLOW TEST:71.782 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:37:32.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-4wnzx [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4wnzx STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4wnzx Feb 11 11:37:33.010: INFO: Found 0 stateful pods, waiting for 1 Feb 11 11:37:43.023: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Feb 11 11:37:53.040: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 11 11:37:53.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:37:53.647: INFO: stderr: "I0211 11:37:53.274532 1105 log.go:172] (0xc0003464d0) (0xc0007a92c0) Create stream\nI0211 11:37:53.274702 1105 log.go:172] (0xc0003464d0) (0xc0007a92c0) Stream added, broadcasting: 1\nI0211 11:37:53.281655 1105 log.go:172] (0xc0003464d0) Reply frame received for 
1\nI0211 11:37:53.281695 1105 log.go:172] (0xc0003464d0) (0xc000708000) Create stream\nI0211 11:37:53.281703 1105 log.go:172] (0xc0003464d0) (0xc000708000) Stream added, broadcasting: 3\nI0211 11:37:53.284030 1105 log.go:172] (0xc0003464d0) Reply frame received for 3\nI0211 11:37:53.284063 1105 log.go:172] (0xc0003464d0) (0xc000648000) Create stream\nI0211 11:37:53.284088 1105 log.go:172] (0xc0003464d0) (0xc000648000) Stream added, broadcasting: 5\nI0211 11:37:53.286118 1105 log.go:172] (0xc0003464d0) Reply frame received for 5\nI0211 11:37:53.490980 1105 log.go:172] (0xc0003464d0) Data frame received for 3\nI0211 11:37:53.491068 1105 log.go:172] (0xc000708000) (3) Data frame handling\nI0211 11:37:53.491137 1105 log.go:172] (0xc000708000) (3) Data frame sent\nI0211 11:37:53.633126 1105 log.go:172] (0xc0003464d0) Data frame received for 1\nI0211 11:37:53.633284 1105 log.go:172] (0xc0003464d0) (0xc000708000) Stream removed, broadcasting: 3\nI0211 11:37:53.633376 1105 log.go:172] (0xc0007a92c0) (1) Data frame handling\nI0211 11:37:53.633428 1105 log.go:172] (0xc0007a92c0) (1) Data frame sent\nI0211 11:37:53.633492 1105 log.go:172] (0xc0003464d0) (0xc000648000) Stream removed, broadcasting: 5\nI0211 11:37:53.633533 1105 log.go:172] (0xc0003464d0) (0xc0007a92c0) Stream removed, broadcasting: 1\nI0211 11:37:53.633557 1105 log.go:172] (0xc0003464d0) Go away received\nI0211 11:37:53.633918 1105 log.go:172] (0xc0003464d0) (0xc0007a92c0) Stream removed, broadcasting: 1\nI0211 11:37:53.633940 1105 log.go:172] (0xc0003464d0) (0xc000708000) Stream removed, broadcasting: 3\nI0211 11:37:53.633954 1105 log.go:172] (0xc0003464d0) (0xc000648000) Stream removed, broadcasting: 5\n" Feb 11 11:37:53.647: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:37:53.647: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:37:53.658: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 11 11:38:03.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:38:03.699: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 11:38:03.892: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999994813s Feb 11 11:38:04.911: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.88716444s Feb 11 11:38:05.933: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.86835182s Feb 11 11:38:06.955: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.846213287s Feb 11 11:38:07.976: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.824334777s Feb 11 11:38:08.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.803690506s Feb 11 11:38:10.013: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.781375913s Feb 11 11:38:12.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.766569563s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-4wnzx Feb 11 11:38:13.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:38:14.428: INFO: stderr: "I0211 11:38:14.110905 1127 log.go:172] (0xc000154630) (0xc000750640) Create stream\nI0211 11:38:14.111147 1127 log.go:172] (0xc000154630) 
(0xc000750640) Stream added, broadcasting: 1\nI0211 11:38:14.118969 1127 log.go:172] (0xc000154630) Reply frame received for 1\nI0211 11:38:14.119014 1127 log.go:172] (0xc000154630) (0xc000650dc0) Create stream\nI0211 11:38:14.119023 1127 log.go:172] (0xc000154630) (0xc000650dc0) Stream added, broadcasting: 3\nI0211 11:38:14.122429 1127 log.go:172] (0xc000154630) Reply frame received for 3\nI0211 11:38:14.122455 1127 log.go:172] (0xc000154630) (0xc0007506e0) Create stream\nI0211 11:38:14.122465 1127 log.go:172] (0xc000154630) (0xc0007506e0) Stream added, broadcasting: 5\nI0211 11:38:14.124070 1127 log.go:172] (0xc000154630) Reply frame received for 5\nI0211 11:38:14.284828 1127 log.go:172] (0xc000154630) Data frame received for 3\nI0211 11:38:14.284922 1127 log.go:172] (0xc000650dc0) (3) Data frame handling\nI0211 11:38:14.284944 1127 log.go:172] (0xc000650dc0) (3) Data frame sent\nI0211 11:38:14.416112 1127 log.go:172] (0xc000154630) Data frame received for 1\nI0211 11:38:14.416187 1127 log.go:172] (0xc000154630) (0xc000650dc0) Stream removed, broadcasting: 3\nI0211 11:38:14.416237 1127 log.go:172] (0xc000750640) (1) Data frame handling\nI0211 11:38:14.416261 1127 log.go:172] (0xc000750640) (1) Data frame sent\nI0211 11:38:14.416274 1127 log.go:172] (0xc000154630) (0xc0007506e0) Stream removed, broadcasting: 5\nI0211 11:38:14.416291 1127 log.go:172] (0xc000154630) (0xc000750640) Stream removed, broadcasting: 1\nI0211 11:38:14.416305 1127 log.go:172] (0xc000154630) Go away received\nI0211 11:38:14.416620 1127 log.go:172] (0xc000154630) (0xc000750640) Stream removed, broadcasting: 1\nI0211 11:38:14.416637 1127 log.go:172] (0xc000154630) (0xc000650dc0) Stream removed, broadcasting: 3\nI0211 11:38:14.416651 1127 log.go:172] (0xc000154630) (0xc0007506e0) Stream removed, broadcasting: 5\n" Feb 11 11:38:14.429: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 11:38:14.429: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 11:38:14.449: INFO: Found 1 stateful pods, waiting for 3 Feb 11 11:38:24.497: INFO: Found 2 stateful pods, waiting for 3 Feb 11 11:38:34.488: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 11:38:34.488: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 11:38:34.488: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 11 11:38:44.481: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 11:38:44.482: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 11:38:44.482: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 11 11:38:44.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:38:45.035: INFO: stderr: "I0211 11:38:44.752986 1149 log.go:172] (0xc000508370) (0xc00071a640) Create stream\nI0211 11:38:44.753118 1149 log.go:172] (0xc000508370) (0xc00071a640) Stream added, broadcasting: 1\nI0211 11:38:44.759685 1149 log.go:172] (0xc000508370) Reply frame received for 1\nI0211 11:38:44.759739 1149 log.go:172] (0xc000508370) (0xc000640dc0) 
Create stream\nI0211 11:38:44.759746 1149 log.go:172] (0xc000508370) (0xc000640dc0) Stream added, broadcasting: 3\nI0211 11:38:44.761302 1149 log.go:172] (0xc000508370) Reply frame received for 3\nI0211 11:38:44.761346 1149 log.go:172] (0xc000508370) (0xc0002d8000) Create stream\nI0211 11:38:44.761357 1149 log.go:172] (0xc000508370) (0xc0002d8000) Stream added, broadcasting: 5\nI0211 11:38:44.762594 1149 log.go:172] (0xc000508370) Reply frame received for 5\nI0211 11:38:44.882440 1149 log.go:172] (0xc000508370) Data frame received for 3\nI0211 11:38:44.882521 1149 log.go:172] (0xc000640dc0) (3) Data frame handling\nI0211 11:38:44.882571 1149 log.go:172] (0xc000640dc0) (3) Data frame sent\nI0211 11:38:45.025231 1149 log.go:172] (0xc000508370) (0xc000640dc0) Stream removed, broadcasting: 3\nI0211 11:38:45.025342 1149 log.go:172] (0xc000508370) Data frame received for 1\nI0211 11:38:45.025370 1149 log.go:172] (0xc00071a640) (1) Data frame handling\nI0211 11:38:45.025380 1149 log.go:172] (0xc00071a640) (1) Data frame sent\nI0211 11:38:45.025575 1149 log.go:172] (0xc000508370) (0xc00071a640) Stream removed, broadcasting: 1\nI0211 11:38:45.025765 1149 log.go:172] (0xc000508370) (0xc0002d8000) Stream removed, broadcasting: 5\nI0211 11:38:45.025806 1149 log.go:172] (0xc000508370) Go away received\nI0211 11:38:45.026305 1149 log.go:172] (0xc000508370) (0xc00071a640) Stream removed, broadcasting: 1\nI0211 11:38:45.026324 1149 log.go:172] (0xc000508370) (0xc000640dc0) Stream removed, broadcasting: 3\nI0211 11:38:45.026339 1149 log.go:172] (0xc000508370) (0xc0002d8000) Stream removed, broadcasting: 5\n" Feb 11 11:38:45.035: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:38:45.035: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:38:45.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:38:45.614: INFO: stderr: "I0211 11:38:45.218809 1170 log.go:172] (0xc0001386e0) (0xc00075c640) Create stream\nI0211 11:38:45.219006 1170 log.go:172] (0xc0001386e0) (0xc00075c640) Stream added, broadcasting: 1\nI0211 11:38:45.225144 1170 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0211 11:38:45.225178 1170 log.go:172] (0xc0001386e0) (0xc000664dc0) Create stream\nI0211 11:38:45.225188 1170 log.go:172] (0xc0001386e0) (0xc000664dc0) Stream added, broadcasting: 3\nI0211 11:38:45.227270 1170 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0211 11:38:45.227334 1170 log.go:172] (0xc0001386e0) (0xc000664f00) Create stream\nI0211 11:38:45.227342 1170 log.go:172] (0xc0001386e0) (0xc000664f00) Stream added, broadcasting: 5\nI0211 11:38:45.228699 1170 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0211 11:38:45.455663 1170 log.go:172] (0xc0001386e0) Data frame received for 3\nI0211 11:38:45.455749 1170 log.go:172] (0xc000664dc0) (3) Data frame handling\nI0211 11:38:45.455784 1170 log.go:172] (0xc000664dc0) (3) Data frame sent\nI0211 11:38:45.603949 1170 log.go:172] (0xc0001386e0) (0xc000664dc0) Stream removed, broadcasting: 3\nI0211 11:38:45.604495 1170 log.go:172] (0xc0001386e0) Data frame received for 1\nI0211 11:38:45.604598 1170 log.go:172] (0xc0001386e0) (0xc000664f00) Stream removed, broadcasting: 5\nI0211 11:38:45.604673 1170 log.go:172] (0xc00075c640) (1) Data frame handling\nI0211 11:38:45.604690 1170 log.go:172] 
(0xc00075c640) (1) Data frame sent\nI0211 11:38:45.604699 1170 log.go:172] (0xc0001386e0) (0xc00075c640) Stream removed, broadcasting: 1\nI0211 11:38:45.604712 1170 log.go:172] (0xc0001386e0) Go away received\nI0211 11:38:45.605410 1170 log.go:172] (0xc0001386e0) (0xc00075c640) Stream removed, broadcasting: 1\nI0211 11:38:45.605446 1170 log.go:172] (0xc0001386e0) (0xc000664dc0) Stream removed, broadcasting: 3\nI0211 11:38:45.605461 1170 log.go:172] (0xc0001386e0) (0xc000664f00) Stream removed, broadcasting: 5\n" Feb 11 11:38:45.614: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:38:45.614: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:38:45.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:38:46.218: INFO: stderr: "I0211 11:38:45.800486 1192 log.go:172] (0xc00013a630) (0xc0006854a0) Create stream\nI0211 11:38:45.800645 1192 log.go:172] (0xc00013a630) (0xc0006854a0) Stream added, broadcasting: 1\nI0211 11:38:45.810078 1192 log.go:172] (0xc00013a630) Reply frame received for 1\nI0211 11:38:45.810176 1192 log.go:172] (0xc00013a630) (0xc000290000) Create stream\nI0211 11:38:45.810203 1192 log.go:172] (0xc00013a630) (0xc000290000) Stream added, broadcasting: 3\nI0211 11:38:45.811488 1192 log.go:172] (0xc00013a630) Reply frame received for 3\nI0211 11:38:45.811508 1192 log.go:172] (0xc00013a630) (0xc0002900a0) Create stream\nI0211 11:38:45.811515 1192 log.go:172] (0xc00013a630) (0xc0002900a0) Stream added, broadcasting: 5\nI0211 11:38:45.812190 1192 log.go:172] (0xc00013a630) Reply frame received for 5\nI0211 11:38:46.095394 1192 log.go:172] (0xc00013a630) Data frame received for 3\nI0211 11:38:46.095434 1192 log.go:172] (0xc000290000) (3) Data frame handling\nI0211 11:38:46.095452 1192 log.go:172] (0xc000290000) (3) Data frame sent\nI0211 11:38:46.206407 1192 log.go:172] (0xc00013a630) Data frame received for 1\nI0211 11:38:46.206508 1192 log.go:172] (0xc00013a630) (0xc000290000) Stream removed, broadcasting: 3\nI0211 11:38:46.206603 1192 log.go:172] (0xc0006854a0) (1) Data frame handling\nI0211 11:38:46.206627 1192 log.go:172] (0xc0006854a0) (1) Data frame sent\nI0211 11:38:46.206639 1192 log.go:172] (0xc00013a630) (0xc0006854a0) Stream removed, broadcasting: 1\nI0211 11:38:46.207424 1192 log.go:172] (0xc00013a630) (0xc0002900a0) Stream removed, broadcasting: 5\nI0211 11:38:46.207489 1192 log.go:172] (0xc00013a630) (0xc0006854a0) Stream removed, broadcasting: 1\nI0211 11:38:46.207500 1192 log.go:172] (0xc00013a630) (0xc000290000) Stream removed, broadcasting: 3\nI0211 11:38:46.207510 1192 log.go:172] (0xc00013a630) (0xc0002900a0) Stream removed, broadcasting: 5\nI0211 11:38:46.207734 1192 log.go:172] (0xc00013a630) Go away received\n" Feb 11 11:38:46.219: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:38:46.219: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:38:46.219: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 11:38:46.236: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 11 11:38:56.267: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:38:56.267: INFO: Waiting for pod ss-1 
to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:38:56.267: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:38:56.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999993619s Feb 11 11:38:57.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978649361s Feb 11 11:38:58.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.954796808s Feb 11 11:38:59.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.891154992s Feb 11 11:39:00.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.873367509s Feb 11 11:39:01.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.856189034s Feb 11 11:39:03.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.830358792s Feb 11 11:39:04.275: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.15923463s Feb 11 11:39:06.754: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.145249955s STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-4wnzx Feb 11 11:39:07.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:39:08.411: INFO: stderr: "I0211 11:39:08.066214 1214 log.go:172] (0xc000138630) (0xc0006f0640) Create stream\nI0211 11:39:08.066335 1214 log.go:172] (0xc000138630) (0xc0006f0640) Stream added, broadcasting: 1\nI0211 11:39:08.072203 1214 log.go:172] (0xc000138630) Reply frame received for 1\nI0211 11:39:08.072285 1214 log.go:172] (0xc000138630) (0xc0006f06e0) Create stream\nI0211 11:39:08.072308 1214 log.go:172] (0xc000138630) (0xc0006f06e0) Stream added, broadcasting: 3\nI0211 11:39:08.073625 1214 log.go:172] (0xc000138630) Reply frame received for 3\nI0211 11:39:08.073646 1214 log.go:172] (0xc000138630) (0xc0007d6dc0) Create stream\nI0211 11:39:08.073656 1214 log.go:172] (0xc000138630) (0xc0007d6dc0) Stream added, broadcasting: 5\nI0211 11:39:08.075175 1214 log.go:172] (0xc000138630) Reply frame received for 5\nI0211 11:39:08.219108 1214 log.go:172] (0xc000138630) Data frame received for 3\nI0211 11:39:08.219221 1214 log.go:172] (0xc0006f06e0) (3) Data frame handling\nI0211 11:39:08.219250 1214 log.go:172] (0xc0006f06e0) (3) Data frame sent\nI0211 11:39:08.393522 1214 log.go:172] (0xc000138630) (0xc0006f06e0) Stream removed, broadcasting: 3\nI0211 11:39:08.394015 1214 log.go:172] (0xc000138630) Data frame received for 1\nI0211 11:39:08.394140 1214 log.go:172] (0xc0006f0640) (1) Data frame handling\nI0211 11:39:08.394196 1214 log.go:172] (0xc0006f0640) (1) Data frame sent\nI0211 11:39:08.394228 1214 log.go:172] (0xc000138630) (0xc0006f0640) Stream removed, broadcasting: 1\nI0211 11:39:08.394313 1214 log.go:172] (0xc000138630) (0xc0007d6dc0) Stream removed, broadcasting: 5\nI0211 11:39:08.394464 1214 log.go:172] (0xc000138630) Go away received\nI0211 11:39:08.395272 1214 log.go:172] (0xc000138630) (0xc0006f0640) Stream removed, broadcasting: 1\nI0211 11:39:08.395299 1214 log.go:172] (0xc000138630) (0xc0006f06e0) Stream removed, broadcasting: 3\nI0211 11:39:08.395322 1214 log.go:172] (0xc000138630) (0xc0007d6dc0) Stream removed, broadcasting: 5\n" Feb 11 11:39:08.412: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 11:39:08.412: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 11:39:08.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:39:08.970: INFO: stderr: "I0211 11:39:08.663928 1236 log.go:172] (0xc000138790) (0xc0006534a0) Create stream\nI0211 11:39:08.664381 1236 log.go:172] (0xc000138790) (0xc0006534a0) Stream added, broadcasting: 1\nI0211 11:39:08.671392 1236 log.go:172] (0xc000138790) Reply frame received for 1\nI0211 11:39:08.671424 1236 log.go:172] (0xc000138790) (0xc000714000) Create stream\nI0211 11:39:08.671433 1236 log.go:172] (0xc000138790) (0xc000714000) Stream added, broadcasting: 3\nI0211 11:39:08.672788 1236 log.go:172] (0xc000138790) Reply frame received for 3\nI0211 11:39:08.672826 1236 log.go:172] (0xc000138790) (0xc000392000) Create stream\nI0211 11:39:08.672838 1236 log.go:172] (0xc000138790) (0xc000392000) Stream added, broadcasting: 5\nI0211 11:39:08.673979 1236 log.go:172] (0xc000138790) Reply frame received for 5\nI0211 11:39:08.836563 1236 log.go:172] (0xc000138790) Data frame received for 3\nI0211 11:39:08.836613 1236 log.go:172] (0xc000714000) (3) Data frame handling\nI0211 11:39:08.836632 1236 log.go:172] (0xc000714000) (3) Data frame sent\nI0211 11:39:08.961078 1236 log.go:172] (0xc000138790) (0xc000714000) Stream removed, broadcasting: 3\nI0211 11:39:08.961198 1236 log.go:172] (0xc000138790) Data frame received for 1\nI0211 11:39:08.961214 1236 log.go:172] (0xc0006534a0) (1) Data frame handling\nI0211 11:39:08.961226 1236 log.go:172] (0xc0006534a0) (1) Data frame sent\nI0211 11:39:08.961368 1236 log.go:172] (0xc000138790) (0xc000392000) Stream removed, broadcasting: 5\nI0211 11:39:08.961401 1236 log.go:172] (0xc000138790) (0xc0006534a0) Stream removed, broadcasting: 1\nI0211 11:39:08.961419 1236 log.go:172] (0xc000138790) Go away received\nI0211 11:39:08.961617 1236 log.go:172] (0xc000138790) (0xc0006534a0) Stream removed, broadcasting: 1\nI0211 11:39:08.961631 1236 log.go:172] (0xc000138790) (0xc000714000) Stream removed, broadcasting: 3\nI0211 11:39:08.961635 1236 log.go:172] (0xc000138790) (0xc000392000) Stream removed, broadcasting: 5\n" Feb 11 11:39:08.971: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 11:39:08.971: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 11:39:08.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4wnzx ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:39:09.470: INFO: stderr: "I0211 11:39:09.140247 1257 log.go:172] (0xc000376160) (0xc0007345a0) Create stream\nI0211 11:39:09.140408 1257 log.go:172] (0xc000376160) (0xc0007345a0) Stream added, broadcasting: 1\nI0211 11:39:09.144202 1257 log.go:172] (0xc000376160) Reply frame received for 1\nI0211 11:39:09.144225 1257 log.go:172] (0xc000376160) (0xc0005e8e60) Create stream\nI0211 11:39:09.144231 1257 log.go:172] (0xc000376160) (0xc0005e8e60) Stream added, broadcasting: 3\nI0211 11:39:09.144934 1257 log.go:172] (0xc000376160) Reply frame received for 3\nI0211 11:39:09.144972 1257 log.go:172] (0xc000376160) (0xc00011c000) Create stream\nI0211 11:39:09.144979 1257 log.go:172] (0xc000376160) (0xc00011c000) Stream added, broadcasting: 5\nI0211 11:39:09.145786 1257 log.go:172] 
(0xc000376160) Reply frame received for 5\nI0211 11:39:09.230680 1257 log.go:172] (0xc000376160) Data frame received for 3\nI0211 11:39:09.230750 1257 log.go:172] (0xc0005e8e60) (3) Data frame handling\nI0211 11:39:09.230773 1257 log.go:172] (0xc0005e8e60) (3) Data frame sent\nI0211 11:39:09.456583 1257 log.go:172] (0xc000376160) (0xc0005e8e60) Stream removed, broadcasting: 3\nI0211 11:39:09.456804 1257 log.go:172] (0xc000376160) Data frame received for 1\nI0211 11:39:09.457022 1257 log.go:172] (0xc000376160) (0xc00011c000) Stream removed, broadcasting: 5\nI0211 11:39:09.457085 1257 log.go:172] (0xc0007345a0) (1) Data frame handling\nI0211 11:39:09.457119 1257 log.go:172] (0xc0007345a0) (1) Data frame sent\nI0211 11:39:09.457129 1257 log.go:172] (0xc000376160) (0xc0007345a0) Stream removed, broadcasting: 1\nI0211 11:39:09.457143 1257 log.go:172] (0xc000376160) Go away received\nI0211 11:39:09.457587 1257 log.go:172] (0xc000376160) (0xc0007345a0) Stream removed, broadcasting: 1\nI0211 11:39:09.457617 1257 log.go:172] (0xc000376160) (0xc0005e8e60) Stream removed, broadcasting: 3\nI0211 11:39:09.457639 1257 log.go:172] (0xc000376160) (0xc00011c000) Stream removed, broadcasting: 5\n" Feb 11 11:39:09.470: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 11:39:09.470: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 11:39:09.471: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 11 11:39:39.552: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4wnzx Feb 11 11:39:39.566: INFO: Scaling statefulset ss to 0 Feb 11 11:39:39.591: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 11:39:39.596: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:39:39.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-4wnzx" for this suite. 
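The StatefulSet spec above breaks readiness by moving index.html out of nginx's web root, which makes the image's readiness check fail, and then shows that with the default OrderedReady pod management the controller neither creates nor removes further ordinals while any pod is unready, that scale-up proceeds ss-0, ss-1, ss-2, and that scale-down runs in reverse. A rough manual reproduction, mirroring the commands in the log and assuming a StatefulSet named ss whose readiness probe is answered by nginx from /usr/share/nginx/html:

# break readiness of the lowest ordinal, exactly as the test does
kubectl exec ss-0 -- sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

# scale-up now stalls: ss-1 and ss-2 are not created while ss-0 is unready
kubectl scale statefulset ss --replicas=3
kubectl get statefulset ss -o jsonpath='{.status.readyReplicas}{"\n"}'

# restore readiness and the scale-up resumes in ordinal order;
# a later scale-down removes ss-2, then ss-1, then ss-0
kubectl exec ss-0 -- sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
kubectl scale statefulset ss --replicas=0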
Feb 11 11:39:47.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:39:47.891: INFO: namespace: e2e-tests-statefulset-4wnzx, resource: bindings, ignored listing per whitelist Feb 11 11:39:47.927: INFO: namespace e2e-tests-statefulset-4wnzx deletion completed in 8.292470611s • [SLOW TEST:135.143 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:39:47.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-35191a04-4cc3-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 11 11:39:48.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-gt6wb" to be "success or failure" Feb 11 11:39:48.237: INFO: Pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.810547ms Feb 11 11:39:50.445: INFO: Pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223883829s Feb 11 11:39:52.480: INFO: Pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259050623s Feb 11 11:39:54.550: INFO: Pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329119215s Feb 11 11:39:56.596: INFO: Pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.375027204s Feb 11 11:39:58.639: INFO: Pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.417957077s STEP: Saw pod success Feb 11 11:39:58.639: INFO: Pod "pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:39:58.646: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 11 11:39:58.735: INFO: Waiting for pod pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005 to disappear Feb 11 11:39:58.753: INFO: Pod pod-configmaps-351cdff7-4cc3-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:39:58.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gt6wb" for this suite. Feb 11 11:40:04.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:40:05.179: INFO: namespace: e2e-tests-configmap-gt6wb, resource: bindings, ignored listing per whitelist Feb 11 11:40:05.186: INFO: namespace e2e-tests-configmap-gt6wb deletion completed in 6.421825296s • [SLOW TEST:17.258 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:40:05.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-mlqh STEP: Creating a pod to test atomic-volume-subpath Feb 11 11:40:05.633: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mlqh" in namespace "e2e-tests-subpath-9cgmg" to be "success or failure" Feb 11 11:40:05.667: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. Elapsed: 34.345939ms Feb 11 11:40:07.754: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120916069s Feb 11 11:40:09.785: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152387925s Feb 11 11:40:12.143: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.510440182s Feb 11 11:40:14.213: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580410252s Feb 11 11:40:16.226: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.592761277s Feb 11 11:40:18.242: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.608871919s Feb 11 11:40:20.261: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.62848703s Feb 11 11:40:22.278: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 16.644778984s Feb 11 11:40:24.288: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 18.655134604s Feb 11 11:40:26.306: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 20.673182888s Feb 11 11:40:28.383: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 22.750001098s Feb 11 11:40:30.397: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 24.763778386s Feb 11 11:40:32.414: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 26.781476294s Feb 11 11:40:34.433: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 28.799743574s Feb 11 11:40:36.452: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 30.819187752s Feb 11 11:40:38.503: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 32.870526708s Feb 11 11:40:40.532: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Running", Reason="", readiness=false. Elapsed: 34.89927911s Feb 11 11:40:42.644: INFO: Pod "pod-subpath-test-secret-mlqh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.010608085s STEP: Saw pod success Feb 11 11:40:42.644: INFO: Pod "pod-subpath-test-secret-mlqh" satisfied condition "success or failure" Feb 11 11:40:42.668: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-mlqh container test-container-subpath-secret-mlqh: STEP: delete the pod Feb 11 11:40:42.963: INFO: Waiting for pod pod-subpath-test-secret-mlqh to disappear Feb 11 11:40:43.000: INFO: Pod pod-subpath-test-secret-mlqh no longer exists STEP: Deleting pod pod-subpath-test-secret-mlqh Feb 11 11:40:43.000: INFO: Deleting pod "pod-subpath-test-secret-mlqh" in namespace "e2e-tests-subpath-9cgmg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:40:43.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-9cgmg" for this suite. 
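As a rough illustration of the pattern exercised by this subpath test, the sketch below mounts a single Secret key at a file path via volumeMounts.subPath. All names and the key content are made up for the example, not the manifests the suite generates.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                  # hypothetical
stringData:
  config.txt: "hello from a secret key"
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-subpath-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/config.txt"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/demo/config.txt
      subPath: config.txt            # mount only this key, not the whole volume
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
EOF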
Feb 11 11:40:51.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:40:51.254: INFO: namespace: e2e-tests-subpath-9cgmg, resource: bindings, ignored listing per whitelist Feb 11 11:40:51.260: INFO: namespace e2e-tests-subpath-9cgmg deletion completed in 8.24774598s • [SLOW TEST:46.074 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:40:51.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 11 11:40:51.512: INFO: Waiting up to 5m0s for pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-6p2jb" to be "success or failure" Feb 11 11:40:51.530: INFO: Pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.749385ms Feb 11 11:40:53.542: INFO: Pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02960821s Feb 11 11:40:55.552: INFO: Pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03938212s Feb 11 11:40:57.568: INFO: Pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055186628s Feb 11 11:40:59.915: INFO: Pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.402404221s Feb 11 11:41:01.936: INFO: Pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.42357359s STEP: Saw pod success Feb 11 11:41:01.936: INFO: Pod "pod-5adb5130-4cc3-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:41:01.950: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5adb5130-4cc3-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:41:02.246: INFO: Waiting for pod pod-5adb5130-4cc3-11ea-a6e3-0242ac110005 to disappear Feb 11 11:41:02.286: INFO: Pod pod-5adb5130-4cc3-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:41:02.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6p2jb" for this suite. 
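A minimal sketch of what "(non-root,0644,default)" is probing: an emptyDir on the default medium, written by a non-root user, with 0644 file permissions. The pod name, UID, and command are illustrative, not whatever test image the suite itself runs.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo           # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo data > /cache/file && chmod 0644 /cache/file && ls -ln /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                     # default medium (node storage); medium: Memory would use tmpfs
EOF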
Feb 11 11:41:08.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:41:08.709: INFO: namespace: e2e-tests-emptydir-6p2jb, resource: bindings, ignored listing per whitelist Feb 11 11:41:08.816: INFO: namespace e2e-tests-emptydir-6p2jb deletion completed in 6.523065592s • [SLOW TEST:17.556 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:41:08.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-qjswg STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qjswg to expose endpoints map[] Feb 11 11:41:09.187: INFO: Get endpoints failed (8.332591ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 11 11:41:10.249: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qjswg exposes endpoints map[] (1.070519516s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-qjswg STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qjswg to expose endpoints map[pod1:[100]] Feb 11 11:41:15.129: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.848700325s elapsed, will retry) Feb 11 11:41:18.196: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qjswg exposes endpoints map[pod1:[100]] (7.91567097s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-qjswg STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qjswg to expose endpoints map[pod1:[100] pod2:[101]] Feb 11 11:41:22.679: INFO: Unexpected endpoints: found map[660c58c9-4cc3-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.469306196s elapsed, will retry) Feb 11 11:41:26.996: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qjswg exposes endpoints map[pod2:[101] pod1:[100]] (8.786494227s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-qjswg STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qjswg to expose endpoints map[pod2:[101]] Feb 11 11:41:28.084: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qjswg exposes endpoints map[pod2:[101]] (1.075827441s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-qjswg STEP: 
waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qjswg to expose endpoints map[] Feb 11 11:41:29.132: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qjswg exposes endpoints map[] (1.031615334s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:41:30.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-qjswg" for this suite. Feb 11 11:41:54.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:41:54.407: INFO: namespace: e2e-tests-services-qjswg, resource: bindings, ignored listing per whitelist Feb 11 11:41:54.750: INFO: namespace e2e-tests-services-qjswg deletion completed in 24.496422862s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:45.934 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:41:54.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-cfp7 STEP: Creating a pod to test atomic-volume-subpath Feb 11 11:41:55.053: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cfp7" in namespace "e2e-tests-subpath-6r6g2" to be "success or failure" Feb 11 11:41:55.065: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.420448ms Feb 11 11:41:57.412: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357998354s Feb 11 11:41:59.432: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378235091s Feb 11 11:42:01.584: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.530586036s Feb 11 11:42:03.716: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.66270733s Feb 11 11:42:05.731: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.677075391s Feb 11 11:42:07.973: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.919670639s Feb 11 11:42:09.992: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.938559996s Feb 11 11:42:12.014: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 16.960025241s Feb 11 11:42:14.039: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 18.985174678s Feb 11 11:42:16.058: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 21.004898427s Feb 11 11:42:18.080: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 23.026608367s Feb 11 11:42:20.120: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 25.06692513s Feb 11 11:42:22.153: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 27.099407477s Feb 11 11:42:24.169: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 29.115904571s Feb 11 11:42:26.180: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 31.12649161s Feb 11 11:42:28.411: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Running", Reason="", readiness=false. Elapsed: 33.357561554s Feb 11 11:42:30.426: INFO: Pod "pod-subpath-test-projected-cfp7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.372595907s STEP: Saw pod success Feb 11 11:42:30.426: INFO: Pod "pod-subpath-test-projected-cfp7" satisfied condition "success or failure" Feb 11 11:42:30.434: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-cfp7 container test-container-subpath-projected-cfp7: STEP: delete the pod Feb 11 11:42:30.812: INFO: Waiting for pod pod-subpath-test-projected-cfp7 to disappear Feb 11 11:42:30.841: INFO: Pod pod-subpath-test-projected-cfp7 no longer exists STEP: Deleting pod pod-subpath-test-projected-cfp7 Feb 11 11:42:30.841: INFO: Deleting pod "pod-subpath-test-projected-cfp7" in namespace "e2e-tests-subpath-6r6g2" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:42:30.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6r6g2" for this suite. 
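The projected-volume variant follows the same shape as the secret sketch earlier: one or more sources (here just a ConfigMap) merged into a single volume, with one key mounted via subPath. Names and data below are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                  # hypothetical
data:
  greeting: "hello"
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-subpath-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/greeting"]
    volumeMounts:
    - name: proj
      mountPath: /etc/demo/greeting
      subPath: greeting
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: demo-config
EOF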
Feb 11 11:42:37.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:42:37.986: INFO: namespace: e2e-tests-subpath-6r6g2, resource: bindings, ignored listing per whitelist Feb 11 11:42:38.039: INFO: namespace e2e-tests-subpath-6r6g2 deletion completed in 7.166624399s • [SLOW TEST:43.288 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:42:38.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 11 11:42:38.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-grsxk' Feb 11 11:42:40.799: INFO: stderr: "" Feb 11 11:42:40.799: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Feb 11 11:42:41.831: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:41.831: INFO: Found 0 / 1 Feb 11 11:42:42.813: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:42.813: INFO: Found 0 / 1 Feb 11 11:42:43.817: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:43.817: INFO: Found 0 / 1 Feb 11 11:42:44.815: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:44.815: INFO: Found 0 / 1 Feb 11 11:42:45.924: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:45.924: INFO: Found 0 / 1 Feb 11 11:42:46.811: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:46.812: INFO: Found 0 / 1 Feb 11 11:42:47.811: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:47.811: INFO: Found 0 / 1 Feb 11 11:42:48.832: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:48.832: INFO: Found 0 / 1 Feb 11 11:42:49.812: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:49.813: INFO: Found 1 / 1 Feb 11 11:42:49.813: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 11 11:42:49.817: INFO: Selector matched 1 pods for map[app:redis] Feb 11 11:42:49.817: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
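The selector polling above ("Found 0 / 1 ... Found 1 / 1") has a direct CLI equivalent; kubectl wait blocks until pods matching the label are Ready. The timeout below is an arbitrary choice for the example.

kubectl get pods -l app=redis --namespace=e2e-tests-kubectl-grsxk
kubectl wait --for=condition=Ready pod -l app=redis \
  --namespace=e2e-tests-kubectl-grsxk --timeout=5m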
STEP: checking for a matching strings Feb 11 11:42:49.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-22r9h redis-master --namespace=e2e-tests-kubectl-grsxk' Feb 11 11:42:49.981: INFO: stderr: "" Feb 11 11:42:49.981: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Feb 11:42:47.966 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Feb 11:42:47.966 # Server started, Redis version 3.2.12\n1:M 11 Feb 11:42:47.967 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Feb 11:42:47.967 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 11 11:42:49.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-22r9h redis-master --namespace=e2e-tests-kubectl-grsxk --tail=1' Feb 11 11:42:50.150: INFO: stderr: "" Feb 11 11:42:50.150: INFO: stdout: "1:M 11 Feb 11:42:47.967 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 11 11:42:50.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-22r9h redis-master --namespace=e2e-tests-kubectl-grsxk --limit-bytes=1' Feb 11 11:42:50.304: INFO: stderr: "" Feb 11 11:42:50.304: INFO: stdout: " " STEP: exposing timestamps Feb 11 11:42:50.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-22r9h redis-master --namespace=e2e-tests-kubectl-grsxk --tail=1 --timestamps' Feb 11 11:42:50.526: INFO: stderr: "" Feb 11 11:42:50.526: INFO: stdout: "2020-02-11T11:42:47.967654881Z 1:M 11 Feb 11:42:47.967 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 11 11:42:53.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-22r9h redis-master --namespace=e2e-tests-kubectl-grsxk --since=1s' Feb 11 11:42:53.248: INFO: stderr: "" Feb 11 11:42:53.248: INFO: stdout: "" Feb 11 11:42:53.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-22r9h redis-master --namespace=e2e-tests-kubectl-grsxk --since=24h' Feb 11 11:42:53.463: INFO: stderr: "" Feb 11 11:42:53.463: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Feb 11:42:47.966 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Feb 11:42:47.966 # Server started, Redis version 3.2.12\n1:M 11 Feb 11:42:47.967 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Feb 11:42:47.967 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 11 11:42:53.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-grsxk' Feb 11 11:42:53.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 11 11:42:53.773: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 11 11:42:53.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-grsxk' Feb 11 11:42:54.017: INFO: stderr: "No resources found.\n" Feb 11 11:42:54.017: INFO: stdout: "" Feb 11 11:42:54.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-grsxk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 11 11:42:54.288: INFO: stderr: "" Feb 11 11:42:54.288: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:42:54.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-grsxk" for this suite. 
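For quick reference, these are the log-filtering flags the test just walked through, written against the pod from the run above using kubectl logs (the non-deprecated spelling of kubectl log):

kubectl logs redis-master-22r9h -c redis-master -n e2e-tests-kubectl-grsxk                  # full container log
kubectl logs redis-master-22r9h -c redis-master -n e2e-tests-kubectl-grsxk --tail=1         # last line only
kubectl logs redis-master-22r9h -c redis-master -n e2e-tests-kubectl-grsxk --limit-bytes=1  # first byte only
kubectl logs redis-master-22r9h -c redis-master -n e2e-tests-kubectl-grsxk --tail=1 --timestamps
kubectl logs redis-master-22r9h -c redis-master -n e2e-tests-kubectl-grsxk --since=1s       # empty if nothing was logged in the last second
kubectl logs redis-master-22r9h -c redis-master -n e2e-tests-kubectl-grsxk --since=24h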
Feb 11 11:43:00.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:43:00.519: INFO: namespace: e2e-tests-kubectl-grsxk, resource: bindings, ignored listing per whitelist Feb 11 11:43:00.647: INFO: namespace e2e-tests-kubectl-grsxk deletion completed in 6.336407511s • [SLOW TEST:22.609 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:43:00.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-a7f62fe6-4cc3-11ea-a6e3-0242ac110005 STEP: Creating a pod to test consume secrets Feb 11 11:43:00.882: INFO: Waiting up to 5m0s for pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-2lq6t" to be "success or failure" Feb 11 11:43:00.900: INFO: Pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.178127ms Feb 11 11:43:02.986: INFO: Pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103867448s Feb 11 11:43:05.012: INFO: Pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129950463s Feb 11 11:43:07.024: INFO: Pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142017811s Feb 11 11:43:09.363: INFO: Pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.481139222s Feb 11 11:43:12.023: INFO: Pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.141051067s STEP: Saw pod success Feb 11 11:43:12.023: INFO: Pod "pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:43:12.035: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 11 11:43:12.289: INFO: Waiting for pod pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005 to disappear Feb 11 11:43:12.296: INFO: Pod pod-secrets-a7f77c83-4cc3-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:43:12.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2lq6t" for this suite. Feb 11 11:43:18.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:43:18.675: INFO: namespace: e2e-tests-secrets-2lq6t, resource: bindings, ignored listing per whitelist Feb 11 11:43:18.715: INFO: namespace e2e-tests-secrets-2lq6t deletion completed in 6.403241033s • [SLOW TEST:18.066 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:43:18.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 11 11:43:18.871: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:43:34.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vj7vm" for this suite. 
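A minimal sketch of the shape this spec creates: a restartPolicy: Never pod whose init containers run sequentially to completion before the app container starts. Names and commands are illustrative, not the suite's own pod.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                    # hypothetical
spec:
  restartPolicy: Never
  initContainers:                    # run one at a time, in order, before "main"
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init step"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init step"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo init containers finished"]
EOF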
Feb 11 11:43:40.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:43:40.392: INFO: namespace: e2e-tests-init-container-vj7vm, resource: bindings, ignored listing per whitelist Feb 11 11:43:40.443: INFO: namespace e2e-tests-init-container-vj7vm deletion completed in 6.246803705s • [SLOW TEST:21.727 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:43:40.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 11 11:43:40.845: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 11 11:43:40.870: INFO: Waiting for terminating namespaces to be deleted... Feb 11 11:43:40.873: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 11 11:43:40.887: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:43:40.887: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 11 11:43:40.887: INFO: Container weave ready: true, restart count 0 Feb 11 11:43:40.887: INFO: Container weave-npc ready: true, restart count 0 Feb 11 11:43:40.887: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 11 11:43:40.887: INFO: Container coredns ready: true, restart count 0 Feb 11 11:43:40.887: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:43:40.887: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:43:40.887: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:43:40.887: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 11 11:43:40.887: INFO: Container coredns ready: true, restart count 0 Feb 11 11:43:40.887: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 11 11:43:40.887: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node 
hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 11 11:43:40.936: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-bfdb1fdc-4cc3-11ea-a6e3-0242ac110005.15f256675a7d21bd], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-r69fz/filler-pod-bfdb1fdc-4cc3-11ea-a6e3-0242ac110005 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfdb1fdc-4cc3-11ea-a6e3-0242ac110005.15f256687e4d2157], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfdb1fdc-4cc3-11ea-a6e3-0242ac110005.15f25669022658c0], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfdb1fdc-4cc3-11ea-a6e3-0242ac110005.15f2566939fb9dbf], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f25669b1d5a7fb], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:43:52.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-r69fz" for this suite. 
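To reproduce the "0/1 nodes are available: 1 Insufficient cpu" outcome by hand, check what the node has left and then request more than that; the pod name and the 8-CPU figure below are arbitrary for the example.

kubectl describe node hunter-server-hu5at5svl7ps | grep -A 8 "Allocated resources"
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hungry                   # hypothetical
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "8"                     # deliberately more than the node's remaining allocatable CPU
      limits:
        cpu: "8"
EOF
# The pod stays Pending with a FailedScheduling event:
kubectl get events --field-selector reason=FailedScheduling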
Feb 11 11:44:00.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:44:00.752: INFO: namespace: e2e-tests-sched-pred-r69fz, resource: bindings, ignored listing per whitelist Feb 11 11:44:00.837: INFO: namespace e2e-tests-sched-pred-r69fz deletion completed in 8.389415509s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:20.395 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:44:00.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 11 11:44:01.057: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 11 11:44:01.069: INFO: Waiting for terminating namespaces to be deleted... 
Feb 11 11:44:01.072: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 11 11:44:01.084: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 11 11:44:01.084: INFO: Container kube-proxy ready: true, restart count 0 Feb 11 11:44:01.084: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:44:01.084: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 11 11:44:01.084: INFO: Container weave ready: true, restart count 0 Feb 11 11:44:01.084: INFO: Container weave-npc ready: true, restart count 0 Feb 11 11:44:01.084: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 11 11:44:01.084: INFO: Container coredns ready: true, restart count 0 Feb 11 11:44:01.084: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:44:01.084: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:44:01.084: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 11 11:44:01.084: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 11 11:44:01.084: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d2692eae-4cc3-11ea-a6e3-0242ac110005 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d2692eae-4cc3-11ea-a6e3-0242ac110005 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-d2692eae-4cc3-11ea-a6e3-0242ac110005 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:44:22.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-vc6q5" for this suite. 
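The same label-and-schedule sequence can be driven from the CLI; the test generates a random kubernetes.io/e2e-... key, whereas the key and value below are illustrative.

kubectl label node hunter-server-hu5at5svl7ps disktype=ssd
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo            # hypothetical
spec:
  nodeSelector:
    disktype: ssd                    # must match a label on a schedulable node
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
EOF
# A trailing dash removes the label again, as the test does in its teardown:
kubectl label node hunter-server-hu5at5svl7ps disktype-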
Feb 11 11:44:46.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:44:46.643: INFO: namespace: e2e-tests-sched-pred-vc6q5, resource: bindings, ignored listing per whitelist Feb 11 11:44:46.692: INFO: namespace e2e-tests-sched-pred-vc6q5 deletion completed in 24.248440395s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:45.853 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:44:46.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-74h9s STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 11 11:44:46.907: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 11 11:45:19.217: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-74h9s PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 11:45:19.217: INFO: >>> kubeConfig: /root/.kube/config I0211 11:45:19.311848 9 log.go:172] (0xc00092fa20) (0xc0013521e0) Create stream I0211 11:45:19.312026 9 log.go:172] (0xc00092fa20) (0xc0013521e0) Stream added, broadcasting: 1 I0211 11:45:19.321573 9 log.go:172] (0xc00092fa20) Reply frame received for 1 I0211 11:45:19.321640 9 log.go:172] (0xc00092fa20) (0xc001531040) Create stream I0211 11:45:19.321658 9 log.go:172] (0xc00092fa20) (0xc001531040) Stream added, broadcasting: 3 I0211 11:45:19.323088 9 log.go:172] (0xc00092fa20) Reply frame received for 3 I0211 11:45:19.323144 9 log.go:172] (0xc00092fa20) (0xc001352280) Create stream I0211 11:45:19.323160 9 log.go:172] (0xc00092fa20) (0xc001352280) Stream added, broadcasting: 5 I0211 11:45:19.325824 9 log.go:172] (0xc00092fa20) Reply frame received for 5 I0211 11:45:19.499117 9 log.go:172] (0xc00092fa20) Data frame received for 3 I0211 11:45:19.499226 9 log.go:172] (0xc001531040) (3) Data frame handling I0211 11:45:19.499270 9 log.go:172] (0xc001531040) (3) Data frame sent I0211 11:45:19.691833 9 log.go:172] (0xc00092fa20) Data frame received for 1 I0211 11:45:19.691980 9 log.go:172] (0xc0013521e0) (1) Data frame handling I0211 11:45:19.692044 9 log.go:172] 
(0xc0013521e0) (1) Data frame sent I0211 11:45:19.692089 9 log.go:172] (0xc00092fa20) (0xc0013521e0) Stream removed, broadcasting: 1 I0211 11:45:19.692673 9 log.go:172] (0xc00092fa20) (0xc001352280) Stream removed, broadcasting: 5 I0211 11:45:19.692774 9 log.go:172] (0xc00092fa20) (0xc001531040) Stream removed, broadcasting: 3 I0211 11:45:19.692859 9 log.go:172] (0xc00092fa20) (0xc0013521e0) Stream removed, broadcasting: 1 I0211 11:45:19.692881 9 log.go:172] (0xc00092fa20) (0xc001531040) Stream removed, broadcasting: 3 I0211 11:45:19.692891 9 log.go:172] (0xc00092fa20) (0xc001352280) Stream removed, broadcasting: 5 I0211 11:45:19.693103 9 log.go:172] (0xc00092fa20) Go away received Feb 11 11:45:19.693: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:45:19.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-74h9s" for this suite. Feb 11 11:45:45.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:45:46.031: INFO: namespace: e2e-tests-pod-network-test-74h9s, resource: bindings, ignored listing per whitelist Feb 11 11:45:46.103: INFO: namespace e2e-tests-pod-network-test-74h9s deletion completed in 26.364341959s • [SLOW TEST:59.410 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:45:46.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 11 11:45:46.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dfzn4,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfzn4/configmaps/e2e-watch-test-label-changed,UID:0a918646-4cc4-11ea-a994-fa163e34d433,ResourceVersion:21304505,Generation:0,CreationTimestamp:2020-02-11 11:45:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 11 11:45:46.314: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dfzn4,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfzn4/configmaps/e2e-watch-test-label-changed,UID:0a918646-4cc4-11ea-a994-fa163e34d433,ResourceVersion:21304506,Generation:0,CreationTimestamp:2020-02-11 11:45:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 11 11:45:46.314: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dfzn4,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfzn4/configmaps/e2e-watch-test-label-changed,UID:0a918646-4cc4-11ea-a994-fa163e34d433,ResourceVersion:21304507,Generation:0,CreationTimestamp:2020-02-11 11:45:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 11 11:45:56.513: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dfzn4,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfzn4/configmaps/e2e-watch-test-label-changed,UID:0a918646-4cc4-11ea-a994-fa163e34d433,ResourceVersion:21304521,Generation:0,CreationTimestamp:2020-02-11 11:45:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 11 11:45:56.514: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dfzn4,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfzn4/configmaps/e2e-watch-test-label-changed,UID:0a918646-4cc4-11ea-a994-fa163e34d433,ResourceVersion:21304522,Generation:0,CreationTimestamp:2020-02-11 11:45:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 11 11:45:56.514: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dfzn4,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfzn4/configmaps/e2e-watch-test-label-changed,UID:0a918646-4cc4-11ea-a994-fa163e34d433,ResourceVersion:21304523,Generation:0,CreationTimestamp:2020-02-11 11:45:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:45:56.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-dfzn4" for this suite. Feb 11 11:46:04.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:46:04.780: INFO: namespace: e2e-tests-watch-dfzn4, resource: bindings, ignored listing per whitelist Feb 11 11:46:04.813: INFO: namespace e2e-tests-watch-dfzn4 deletion completed in 8.288396048s • [SLOW TEST:18.709 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:46:04.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 11:46:04.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-mfhnc' Feb 11 11:46:05.191: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 11 11:46:05.192: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 11 11:46:09.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-mfhnc' Feb 11 11:46:09.476: INFO: stderr: "" Feb 11 11:46:09.476: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:46:09.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mfhnc" for this suite. Feb 11 11:46:33.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:46:33.689: INFO: namespace: e2e-tests-kubectl-mfhnc, resource: bindings, ignored listing per whitelist Feb 11 11:46:33.739: INFO: namespace e2e-tests-kubectl-mfhnc deletion completed in 24.238250845s • [SLOW TEST:28.926 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:46:33.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 11 11:46:34.151: INFO: Waiting up to 5m0s for pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005" in namespace "e2e-tests-var-expansion-tc88w" to be "success or failure" Feb 11 11:46:34.170: INFO: Pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.245647ms Feb 11 11:46:36.186: INFO: Pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035457017s Feb 11 11:46:38.199: INFO: Pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048236467s Feb 11 11:46:40.210: INFO: Pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.058725675s Feb 11 11:46:42.234: INFO: Pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082931516s Feb 11 11:46:44.311: INFO: Pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159834742s STEP: Saw pod success Feb 11 11:46:44.311: INFO: Pod "var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:46:44.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005 container dapi-container: STEP: delete the pod Feb 11 11:46:44.692: INFO: Waiting for pod var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005 to disappear Feb 11 11:46:44.706: INFO: Pod var-expansion-2706a069-4cc4-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:46:44.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-tc88w" for this suite. Feb 11 11:46:50.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:46:50.953: INFO: namespace: e2e-tests-var-expansion-tc88w, resource: bindings, ignored listing per whitelist Feb 11 11:46:51.018: INFO: namespace e2e-tests-var-expansion-tc88w deletion completed in 6.290892924s • [SLOW TEST:17.278 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:46:51.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-dt5hq [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dt5hq STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-dt5hq Feb 11 11:46:51.422: INFO: Found 0 stateful pods, waiting for 1 Feb 11 11:47:01.442: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 11 11:47:01.449: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:47:02.086: INFO: stderr: "I0211 11:47:01.692564 1540 log.go:172] (0xc0008962c0) (0xc000776640) Create stream\nI0211 11:47:01.692989 1540 log.go:172] (0xc0008962c0) (0xc000776640) Stream added, broadcasting: 1\nI0211 11:47:01.701683 1540 log.go:172] (0xc0008962c0) Reply frame received for 1\nI0211 11:47:01.701740 1540 log.go:172] (0xc0008962c0) (0xc0005c6c80) Create stream\nI0211 11:47:01.701753 1540 log.go:172] (0xc0008962c0) (0xc0005c6c80) Stream added, broadcasting: 3\nI0211 11:47:01.703563 1540 log.go:172] (0xc0008962c0) Reply frame received for 3\nI0211 11:47:01.703592 1540 log.go:172] (0xc0008962c0) (0xc0005c6dc0) Create stream\nI0211 11:47:01.703600 1540 log.go:172] (0xc0008962c0) (0xc0005c6dc0) Stream added, broadcasting: 5\nI0211 11:47:01.705466 1540 log.go:172] (0xc0008962c0) Reply frame received for 5\nI0211 11:47:01.890705 1540 log.go:172] (0xc0008962c0) Data frame received for 3\nI0211 11:47:01.890822 1540 log.go:172] (0xc0005c6c80) (3) Data frame handling\nI0211 11:47:01.890858 1540 log.go:172] (0xc0005c6c80) (3) Data frame sent\nI0211 11:47:02.066080 1540 log.go:172] (0xc0008962c0) Data frame received for 1\nI0211 11:47:02.066288 1540 log.go:172] (0xc0008962c0) (0xc0005c6dc0) Stream removed, broadcasting: 5\nI0211 11:47:02.066365 1540 log.go:172] (0xc000776640) (1) Data frame handling\nI0211 11:47:02.066401 1540 log.go:172] (0xc000776640) (1) Data frame sent\nI0211 11:47:02.066452 1540 log.go:172] (0xc0008962c0) (0xc0005c6c80) Stream removed, broadcasting: 3\nI0211 11:47:02.066660 1540 log.go:172] (0xc0008962c0) (0xc000776640) Stream removed, broadcasting: 1\nI0211 11:47:02.066696 1540 log.go:172] (0xc0008962c0) Go away received\nI0211 11:47:02.067616 1540 log.go:172] (0xc0008962c0) (0xc000776640) Stream removed, broadcasting: 1\nI0211 11:47:02.067650 1540 log.go:172] (0xc0008962c0) (0xc0005c6c80) Stream removed, broadcasting: 3\nI0211 11:47:02.067668 1540 log.go:172] (0xc0008962c0) (0xc0005c6dc0) Stream removed, broadcasting: 5\n" Feb 11 11:47:02.086: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:47:02.086: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:47:02.101: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 11 11:47:12.121: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:47:12.121: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 11:47:12.182: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:12.183: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:12.183: INFO: ss-1 Pending [] Feb 11 11:47:12.183: INFO: Feb 11 11:47:12.183: INFO: StatefulSet ss has not reached scale 3, at 2 Feb 11 11:47:14.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976261394s Feb 11 11:47:15.639: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 6.706047729s Feb 11 11:47:16.651: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.520366885s Feb 11 11:47:17.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.508414998s Feb 11 11:47:19.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.494887503s Feb 11 11:47:20.532: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.655772673s Feb 11 11:47:21.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 626.75484ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-dt5hq Feb 11 11:47:22.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:47:23.664: INFO: stderr: "I0211 11:47:22.914995 1563 log.go:172] (0xc0001386e0) (0xc000716640) Create stream\nI0211 11:47:22.915243 1563 log.go:172] (0xc0001386e0) (0xc000716640) Stream added, broadcasting: 1\nI0211 11:47:22.921204 1563 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0211 11:47:22.921281 1563 log.go:172] (0xc0001386e0) (0xc0005a0e60) Create stream\nI0211 11:47:22.921291 1563 log.go:172] (0xc0001386e0) (0xc0005a0e60) Stream added, broadcasting: 3\nI0211 11:47:22.922453 1563 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0211 11:47:22.922473 1563 log.go:172] (0xc0001386e0) (0xc00065e000) Create stream\nI0211 11:47:22.922480 1563 log.go:172] (0xc0001386e0) (0xc00065e000) Stream added, broadcasting: 5\nI0211 11:47:22.923422 1563 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0211 11:47:23.226592 1563 log.go:172] (0xc0001386e0) Data frame received for 3\nI0211 11:47:23.226672 1563 log.go:172] (0xc0005a0e60) (3) Data frame handling\nI0211 11:47:23.226740 1563 log.go:172] (0xc0005a0e60) (3) Data frame sent\nI0211 11:47:23.651235 1563 log.go:172] (0xc0001386e0) (0xc0005a0e60) Stream removed, broadcasting: 3\nI0211 11:47:23.651347 1563 log.go:172] (0xc0001386e0) Data frame received for 1\nI0211 11:47:23.651374 1563 log.go:172] (0xc000716640) (1) Data frame handling\nI0211 11:47:23.651389 1563 log.go:172] (0xc000716640) (1) Data frame sent\nI0211 11:47:23.651402 1563 log.go:172] (0xc0001386e0) (0xc000716640) Stream removed, broadcasting: 1\nI0211 11:47:23.651972 1563 log.go:172] (0xc0001386e0) (0xc00065e000) Stream removed, broadcasting: 5\nI0211 11:47:23.652022 1563 log.go:172] (0xc0001386e0) (0xc000716640) Stream removed, broadcasting: 1\nI0211 11:47:23.652033 1563 log.go:172] (0xc0001386e0) (0xc0005a0e60) Stream removed, broadcasting: 3\nI0211 11:47:23.652050 1563 log.go:172] (0xc0001386e0) (0xc00065e000) Stream removed, broadcasting: 5\n" Feb 11 11:47:23.664: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 11:47:23.664: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 11:47:23.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:47:24.353: INFO: stderr: "I0211 11:47:23.841590 1585 log.go:172] (0xc0007062c0) (0xc0006792c0) Create stream\nI0211 11:47:23.841844 1585 log.go:172] (0xc0007062c0) (0xc0006792c0) Stream added, broadcasting: 1\nI0211 11:47:23.866986 1585 log.go:172] 
(0xc0007062c0) Reply frame received for 1\nI0211 11:47:23.867135 1585 log.go:172] (0xc0007062c0) (0xc000726000) Create stream\nI0211 11:47:23.867162 1585 log.go:172] (0xc0007062c0) (0xc000726000) Stream added, broadcasting: 3\nI0211 11:47:23.874788 1585 log.go:172] (0xc0007062c0) Reply frame received for 3\nI0211 11:47:23.874843 1585 log.go:172] (0xc0007062c0) (0xc0000dc000) Create stream\nI0211 11:47:23.874864 1585 log.go:172] (0xc0007062c0) (0xc0000dc000) Stream added, broadcasting: 5\nI0211 11:47:23.880121 1585 log.go:172] (0xc0007062c0) Reply frame received for 5\nI0211 11:47:24.195230 1585 log.go:172] (0xc0007062c0) Data frame received for 3\nI0211 11:47:24.195649 1585 log.go:172] (0xc000726000) (3) Data frame handling\nI0211 11:47:24.195667 1585 log.go:172] (0xc000726000) (3) Data frame sent\nI0211 11:47:24.195751 1585 log.go:172] (0xc0007062c0) Data frame received for 5\nI0211 11:47:24.195760 1585 log.go:172] (0xc0000dc000) (5) Data frame handling\nI0211 11:47:24.195776 1585 log.go:172] (0xc0000dc000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0211 11:47:24.341864 1585 log.go:172] (0xc0007062c0) Data frame received for 1\nI0211 11:47:24.342145 1585 log.go:172] (0xc0006792c0) (1) Data frame handling\nI0211 11:47:24.342186 1585 log.go:172] (0xc0006792c0) (1) Data frame sent\nI0211 11:47:24.342279 1585 log.go:172] (0xc0007062c0) (0xc000726000) Stream removed, broadcasting: 3\nI0211 11:47:24.342363 1585 log.go:172] (0xc0007062c0) (0xc0006792c0) Stream removed, broadcasting: 1\nI0211 11:47:24.342996 1585 log.go:172] (0xc0007062c0) (0xc0000dc000) Stream removed, broadcasting: 5\nI0211 11:47:24.343075 1585 log.go:172] (0xc0007062c0) (0xc0006792c0) Stream removed, broadcasting: 1\nI0211 11:47:24.343105 1585 log.go:172] (0xc0007062c0) (0xc000726000) Stream removed, broadcasting: 3\nI0211 11:47:24.343143 1585 log.go:172] (0xc0007062c0) (0xc0000dc000) Stream removed, broadcasting: 5\n" Feb 11 11:47:24.353: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 11:47:24.353: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 11:47:24.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:47:24.806: INFO: stderr: "I0211 11:47:24.578063 1608 log.go:172] (0xc0008642c0) (0xc000740640) Create stream\nI0211 11:47:24.578252 1608 log.go:172] (0xc0008642c0) (0xc000740640) Stream added, broadcasting: 1\nI0211 11:47:24.584175 1608 log.go:172] (0xc0008642c0) Reply frame received for 1\nI0211 11:47:24.584223 1608 log.go:172] (0xc0008642c0) (0xc000660be0) Create stream\nI0211 11:47:24.584233 1608 log.go:172] (0xc0008642c0) (0xc000660be0) Stream added, broadcasting: 3\nI0211 11:47:24.587080 1608 log.go:172] (0xc0008642c0) Reply frame received for 3\nI0211 11:47:24.587106 1608 log.go:172] (0xc0008642c0) (0xc000696000) Create stream\nI0211 11:47:24.587118 1608 log.go:172] (0xc0008642c0) (0xc000696000) Stream added, broadcasting: 5\nI0211 11:47:24.589171 1608 log.go:172] (0xc0008642c0) Reply frame received for 5\nI0211 11:47:24.683099 1608 log.go:172] (0xc0008642c0) Data frame received for 3\nI0211 11:47:24.683137 1608 log.go:172] (0xc000660be0) (3) Data frame handling\nI0211 11:47:24.683150 1608 log.go:172] (0xc000660be0) (3) Data frame sent\nI0211 11:47:24.683178 1608 log.go:172] (0xc0008642c0) Data frame 
received for 5\nI0211 11:47:24.683187 1608 log.go:172] (0xc000696000) (5) Data frame handling\nI0211 11:47:24.683196 1608 log.go:172] (0xc000696000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0211 11:47:24.795816 1608 log.go:172] (0xc0008642c0) Data frame received for 1\nI0211 11:47:24.795893 1608 log.go:172] (0xc0008642c0) (0xc000696000) Stream removed, broadcasting: 5\nI0211 11:47:24.795967 1608 log.go:172] (0xc000740640) (1) Data frame handling\nI0211 11:47:24.795994 1608 log.go:172] (0xc000740640) (1) Data frame sent\nI0211 11:47:24.796045 1608 log.go:172] (0xc0008642c0) (0xc000660be0) Stream removed, broadcasting: 3\nI0211 11:47:24.796082 1608 log.go:172] (0xc0008642c0) (0xc000740640) Stream removed, broadcasting: 1\nI0211 11:47:24.796105 1608 log.go:172] (0xc0008642c0) Go away received\nI0211 11:47:24.796357 1608 log.go:172] (0xc0008642c0) (0xc000740640) Stream removed, broadcasting: 1\nI0211 11:47:24.796376 1608 log.go:172] (0xc0008642c0) (0xc000660be0) Stream removed, broadcasting: 3\nI0211 11:47:24.796385 1608 log.go:172] (0xc0008642c0) (0xc000696000) Stream removed, broadcasting: 5\n" Feb 11 11:47:24.807: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 11:47:24.807: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 11:47:24.824: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 11:47:24.824: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 11:47:24.824: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 11 11:47:24.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:47:25.305: INFO: stderr: "I0211 11:47:25.026864 1630 log.go:172] (0xc000726370) (0xc000752640) Create stream\nI0211 11:47:25.026977 1630 log.go:172] (0xc000726370) (0xc000752640) Stream added, broadcasting: 1\nI0211 11:47:25.030824 1630 log.go:172] (0xc000726370) Reply frame received for 1\nI0211 11:47:25.030895 1630 log.go:172] (0xc000726370) (0xc000678dc0) Create stream\nI0211 11:47:25.030913 1630 log.go:172] (0xc000726370) (0xc000678dc0) Stream added, broadcasting: 3\nI0211 11:47:25.032014 1630 log.go:172] (0xc000726370) Reply frame received for 3\nI0211 11:47:25.032032 1630 log.go:172] (0xc000726370) (0xc000678f00) Create stream\nI0211 11:47:25.032038 1630 log.go:172] (0xc000726370) (0xc000678f00) Stream added, broadcasting: 5\nI0211 11:47:25.032952 1630 log.go:172] (0xc000726370) Reply frame received for 5\nI0211 11:47:25.147240 1630 log.go:172] (0xc000726370) Data frame received for 3\nI0211 11:47:25.147277 1630 log.go:172] (0xc000678dc0) (3) Data frame handling\nI0211 11:47:25.147297 1630 log.go:172] (0xc000678dc0) (3) Data frame sent\nI0211 11:47:25.296369 1630 log.go:172] (0xc000726370) (0xc000678dc0) Stream removed, broadcasting: 3\nI0211 11:47:25.296519 1630 log.go:172] (0xc000726370) Data frame received for 1\nI0211 11:47:25.296553 1630 log.go:172] (0xc000752640) (1) Data frame handling\nI0211 11:47:25.296573 1630 log.go:172] (0xc000752640) (1) Data frame sent\nI0211 11:47:25.296605 1630 log.go:172] (0xc000726370) (0xc000752640) Stream removed, broadcasting: 1\nI0211 11:47:25.296655 1630 log.go:172] 
(0xc000726370) (0xc000678f00) Stream removed, broadcasting: 5\nI0211 11:47:25.296756 1630 log.go:172] (0xc000726370) Go away received\nI0211 11:47:25.296935 1630 log.go:172] (0xc000726370) (0xc000752640) Stream removed, broadcasting: 1\nI0211 11:47:25.296975 1630 log.go:172] (0xc000726370) (0xc000678dc0) Stream removed, broadcasting: 3\nI0211 11:47:25.296981 1630 log.go:172] (0xc000726370) (0xc000678f00) Stream removed, broadcasting: 5\n" Feb 11 11:47:25.305: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:47:25.305: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:47:25.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:47:25.666: INFO: stderr: "I0211 11:47:25.438096 1651 log.go:172] (0xc0006f0370) (0xc00070c640) Create stream\nI0211 11:47:25.438224 1651 log.go:172] (0xc0006f0370) (0xc00070c640) Stream added, broadcasting: 1\nI0211 11:47:25.441612 1651 log.go:172] (0xc0006f0370) Reply frame received for 1\nI0211 11:47:25.441639 1651 log.go:172] (0xc0006f0370) (0xc00070c6e0) Create stream\nI0211 11:47:25.441644 1651 log.go:172] (0xc0006f0370) (0xc00070c6e0) Stream added, broadcasting: 3\nI0211 11:47:25.442350 1651 log.go:172] (0xc0006f0370) Reply frame received for 3\nI0211 11:47:25.442370 1651 log.go:172] (0xc0006f0370) (0xc0005c0c80) Create stream\nI0211 11:47:25.442376 1651 log.go:172] (0xc0006f0370) (0xc0005c0c80) Stream added, broadcasting: 5\nI0211 11:47:25.443124 1651 log.go:172] (0xc0006f0370) Reply frame received for 5\nI0211 11:47:25.562037 1651 log.go:172] (0xc0006f0370) Data frame received for 3\nI0211 11:47:25.562080 1651 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0211 11:47:25.562094 1651 log.go:172] (0xc00070c6e0) (3) Data frame sent\nI0211 11:47:25.655421 1651 log.go:172] (0xc0006f0370) (0xc0005c0c80) Stream removed, broadcasting: 5\nI0211 11:47:25.655541 1651 log.go:172] (0xc0006f0370) Data frame received for 1\nI0211 11:47:25.655587 1651 log.go:172] (0xc0006f0370) (0xc00070c6e0) Stream removed, broadcasting: 3\nI0211 11:47:25.655627 1651 log.go:172] (0xc00070c640) (1) Data frame handling\nI0211 11:47:25.655646 1651 log.go:172] (0xc00070c640) (1) Data frame sent\nI0211 11:47:25.655652 1651 log.go:172] (0xc0006f0370) (0xc00070c640) Stream removed, broadcasting: 1\nI0211 11:47:25.655667 1651 log.go:172] (0xc0006f0370) Go away received\nI0211 11:47:25.656157 1651 log.go:172] (0xc0006f0370) (0xc00070c640) Stream removed, broadcasting: 1\nI0211 11:47:25.656183 1651 log.go:172] (0xc0006f0370) (0xc00070c6e0) Stream removed, broadcasting: 3\nI0211 11:47:25.656203 1651 log.go:172] (0xc0006f0370) (0xc0005c0c80) Stream removed, broadcasting: 5\n" Feb 11 11:47:25.667: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:47:25.667: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:47:25.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 11:47:26.296: INFO: stderr: "I0211 11:47:25.838781 1672 log.go:172] (0xc00013a6e0) (0xc000740640) Create stream\nI0211 11:47:25.838932 1672 log.go:172] (0xc00013a6e0) (0xc000740640) Stream added, 
broadcasting: 1\nI0211 11:47:25.842288 1672 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0211 11:47:25.842315 1672 log.go:172] (0xc00013a6e0) (0xc00066ad20) Create stream\nI0211 11:47:25.842323 1672 log.go:172] (0xc00013a6e0) (0xc00066ad20) Stream added, broadcasting: 3\nI0211 11:47:25.843470 1672 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0211 11:47:25.843500 1672 log.go:172] (0xc00013a6e0) (0xc000376000) Create stream\nI0211 11:47:25.843510 1672 log.go:172] (0xc00013a6e0) (0xc000376000) Stream added, broadcasting: 5\nI0211 11:47:25.844536 1672 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0211 11:47:26.043274 1672 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0211 11:47:26.043333 1672 log.go:172] (0xc00066ad20) (3) Data frame handling\nI0211 11:47:26.043364 1672 log.go:172] (0xc00066ad20) (3) Data frame sent\nI0211 11:47:26.284806 1672 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0211 11:47:26.284834 1672 log.go:172] (0xc000740640) (1) Data frame handling\nI0211 11:47:26.284853 1672 log.go:172] (0xc000740640) (1) Data frame sent\nI0211 11:47:26.284867 1672 log.go:172] (0xc00013a6e0) (0xc000740640) Stream removed, broadcasting: 1\nI0211 11:47:26.285135 1672 log.go:172] (0xc00013a6e0) (0xc00066ad20) Stream removed, broadcasting: 3\nI0211 11:47:26.285975 1672 log.go:172] (0xc00013a6e0) (0xc000376000) Stream removed, broadcasting: 5\nI0211 11:47:26.286009 1672 log.go:172] (0xc00013a6e0) (0xc000740640) Stream removed, broadcasting: 1\nI0211 11:47:26.286020 1672 log.go:172] (0xc00013a6e0) (0xc00066ad20) Stream removed, broadcasting: 3\nI0211 11:47:26.286026 1672 log.go:172] (0xc00013a6e0) (0xc000376000) Stream removed, broadcasting: 5\n" Feb 11 11:47:26.296: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 11:47:26.296: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 11:47:26.296: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 11:47:28.056: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 11 11:47:38.140: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:47:38.140: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:47:38.140: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 11 11:47:38.192: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:38.192: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:38.192: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 
11 11:47:38.192: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:38.192: INFO: Feb 11 11:47:38.192: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:40.710: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:40.710: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:40.711: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:40.711: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:40.711: INFO: Feb 11 11:47:40.711: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:41.730: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:41.730: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:41.730: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:41.730: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:41.730: INFO: Feb 11 11:47:41.730: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:42.756: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:42.756: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:42.756: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:42.756: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:42.756: INFO: Feb 11 11:47:42.756: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:43.794: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:43.794: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:43.794: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:43.794: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:43.794: INFO: Feb 11 11:47:43.794: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:44.969: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:44.969: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:44.969: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:44.969: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:44.969: INFO: Feb 11 11:47:44.969: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:45.987: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:45.987: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:45.988: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:45.988: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:45.988: INFO: Feb 
11 11:47:45.988: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:47.013: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:47.014: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:47.014: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:47.014: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:47.014: INFO: Feb 11 11:47:47.014: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 11 11:47:48.038: INFO: POD NODE PHASE GRACE CONDITIONS Feb 11 11:47:48.038: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:46:51 +0000 UTC }] Feb 11 11:47:48.039: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:48.039: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 11:47:12 +0000 UTC }] Feb 11 11:47:48.039: INFO: Feb 11 11:47:48.039: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespacee2e-tests-statefulset-dt5hq Feb 11 11:47:49.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:47:49.230: INFO: rc: 1 Feb 11 11:47:49.231: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0018e6360 exit status 1 true [0xc001972620 0xc001972638 0xc001972650] [0xc001972620 0xc001972638 0xc001972650] [0xc001972630 0xc001972648] [0x935700 0x935700] 0xc0019ed5c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 11 11:47:59.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:47:59.383: INFO: rc: 1 Feb 11 11:47:59.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011395c0 exit status 1 true [0xc000e516a0 0xc000e516c0 0xc000e516d8] [0xc000e516a0 0xc000e516c0 0xc000e516d8] [0xc000e516b8 0xc000e516d0] [0x935700 0x935700] 0xc001879b00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:48:09.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:48:09.595: INFO: rc: 1 Feb 11 11:48:09.595: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002606630 exit status 1 true [0xc000e50020 0xc000e50080 0xc000e501d8] [0xc000e50020 0xc000e50080 0xc000e501d8] [0xc000e50058 0xc000e50138] [0x935700 0x935700] 0xc0026d61e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:48:19.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:48:19.845: INFO: rc: 1 Feb 11 11:48:19.845: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00199c120 exit status 1 true [0xc001db4000 0xc001db4018 0xc001db4038] [0xc001db4000 0xc001db4018 0xc001db4038] [0xc001db4010 0xc001db4028] [0x935700 0x935700] 0xc0023b4660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:48:29.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:48:30.043: INFO: rc: 1 Feb 11 11:48:30.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a34150 exit status 1 true [0xc0026ea000 0xc0026ea018 0xc0026ea030] [0xc0026ea000 0xc0026ea018 0xc0026ea030] [0xc0026ea010 0xc0026ea028] [0x935700 0x935700] 0xc00204e240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:48:40.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:48:40.189: INFO: rc: 1 Feb 11 11:48:40.190: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00199c240 exit status 1 true [0xc001db4040 0xc001db4060 0xc001db4078] [0xc001db4040 0xc001db4060 0xc001db4078] [0xc001db4058 0xc001db4070] [0x935700 0x935700] 0xc0023b4900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:48:50.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:48:50.360: INFO: rc: 1 Feb 11 11:48:50.360: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00199c360 exit status 1 true [0xc001db4080 0xc001db4098 0xc001db40b0] [0xc001db4080 0xc001db4098 0xc001db40b0] [0xc001db4090 0xc001db40a8] [0x935700 0x935700] 0xc0023b4ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:49:00.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:49:00.597: INFO: rc: 1 Feb 11 11:49:00.598: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000423050 exit status 1 true [0xc00000e010 0xc00000e1b0 0xc00000e248] [0xc00000e010 0xc00000e1b0 0xc00000e248] [0xc00000e120 0xc00000e210] [0x935700 0x935700] 0xc001c68600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:49:10.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:49:10.748: INFO: rc: 1 Feb 11 11:49:10.749: INFO: Waiting 10s to retry 
failed RunHostCmd: error running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true': Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [the same RunHostCmd attempt was retried at roughly 10s intervals from 11:49:20.749 through 11:52:44.975, each run logging rc: 1 and the identical NotFound error for pod "ss-0"] Feb 11 11:52:44.976: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec
--namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000423200 exit status 1 true [0xc00000e250 0xc00000e338 0xc00000e3a8] [0xc00000e250 0xc00000e338 0xc00000e3a8] [0xc00000e318 0xc00000e358] [0x935700 0x935700] 0xc0019c2ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 11 11:52:54.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 11:52:55.118: INFO: rc: 1 Feb 11 11:52:55.118: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 11 11:52:55.118: INFO: Scaling statefulset ss to 0 Feb 11 11:52:55.138: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 11 11:52:55.140: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dt5hq Feb 11 11:52:55.143: INFO: Scaling statefulset ss to 0 Feb 11 11:52:55.152: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 11:52:55.154: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:52:55.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-dt5hq" for this suite. Feb 11 11:53:03.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:53:03.407: INFO: namespace: e2e-tests-statefulset-dt5hq, resource: bindings, ignored listing per whitelist Feb 11 11:53:03.487: INFO: namespace e2e-tests-statefulset-dt5hq deletion completed in 8.177227741s • [SLOW TEST:372.469 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:53:03.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-0f4e9f56-4cc5-11ea-a6e3-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-0f4ea097-4cc5-11ea-a6e3-0242ac110005 STEP: Creating the pod STEP: Deleting configmap 
cm-test-opt-del-0f4e9f56-4cc5-11ea-a6e3-0242ac110005 STEP: Updating configmap cm-test-opt-upd-0f4ea097-4cc5-11ea-a6e3-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-0f4ea0cb-4cc5-11ea-a6e3-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:53:18.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xvcc9" for this suite. Feb 11 11:53:46.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:53:46.485: INFO: namespace: e2e-tests-configmap-xvcc9, resource: bindings, ignored listing per whitelist Feb 11 11:53:46.638: INFO: namespace e2e-tests-configmap-xvcc9 deletion completed in 28.545599654s • [SLOW TEST:43.149 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:53:46.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 11 11:53:46.846: INFO: Waiting up to 5m0s for pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-rpknh" to be "success or failure" Feb 11 11:53:46.932: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.28065ms Feb 11 11:53:49.322: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47492187s Feb 11 11:53:51.336: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489521803s Feb 11 11:53:53.659: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.812329303s Feb 11 11:53:55.678: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.831615747s Feb 11 11:53:57.696: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.849434376s Feb 11 11:53:59.712: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.864864912s STEP: Saw pod success Feb 11 11:53:59.712: INFO: Pod "pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 11:53:59.720: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005 container test-container: STEP: delete the pod Feb 11 11:54:00.782: INFO: Waiting for pod pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005 to disappear Feb 11 11:54:00.923: INFO: Pod pod-28fec0fc-4cc5-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:54:00.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rpknh" for this suite. Feb 11 11:54:07.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:54:07.167: INFO: namespace: e2e-tests-emptydir-rpknh, resource: bindings, ignored listing per whitelist Feb 11 11:54:07.220: INFO: namespace e2e-tests-emptydir-rpknh deletion completed in 6.254482159s • [SLOW TEST:20.581 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:54:07.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-65dw STEP: Creating a pod to test atomic-volume-subpath Feb 11 11:54:07.493: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-65dw" in namespace "e2e-tests-subpath-fql9j" to be "success or failure" Feb 11 11:54:07.505: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.53576ms Feb 11 11:54:10.178: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683885238s Feb 11 11:54:12.195: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701606622s Feb 11 11:54:15.027: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 7.533026816s Feb 11 11:54:17.039: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.545333197s Feb 11 11:54:19.062: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.568304366s Feb 11 11:54:21.136: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.64266668s Feb 11 11:54:23.155: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.661091851s Feb 11 11:54:25.205: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Pending", Reason="", readiness=false. Elapsed: 17.711546741s Feb 11 11:54:27.224: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 19.729915583s Feb 11 11:54:29.248: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 21.754054874s Feb 11 11:54:31.265: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 23.770869933s Feb 11 11:54:33.282: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 25.78795048s Feb 11 11:54:35.301: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 27.806861898s Feb 11 11:54:37.334: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 29.840283574s Feb 11 11:54:39.362: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 31.868465745s Feb 11 11:54:41.377: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 33.883207912s Feb 11 11:54:43.393: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Running", Reason="", readiness=false. Elapsed: 35.899762481s Feb 11 11:54:45.407: INFO: Pod "pod-subpath-test-configmap-65dw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.913416962s STEP: Saw pod success Feb 11 11:54:45.407: INFO: Pod "pod-subpath-test-configmap-65dw" satisfied condition "success or failure" Feb 11 11:54:45.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-65dw container test-container-subpath-configmap-65dw: STEP: delete the pod Feb 11 11:54:46.633: INFO: Waiting for pod pod-subpath-test-configmap-65dw to disappear Feb 11 11:54:46.672: INFO: Pod pod-subpath-test-configmap-65dw no longer exists STEP: Deleting pod pod-subpath-test-configmap-65dw Feb 11 11:54:46.673: INFO: Deleting pod "pod-subpath-test-configmap-65dw" in namespace "e2e-tests-subpath-fql9j" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:54:46.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-fql9j" for this suite. 
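The atomic-writer subpath test above mounts a single configMap key at a subPath inside the test container. A minimal sketch of that pattern, using placeholder names and data that are not taken from the log and assuming the same kubeconfig, looks like this:

$ kubectl create configmap demo-config --from-literal=index.html='hello from a configMap'
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo        # placeholder name, not from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /usr/share/nginx/html/index.html"]
    volumeMounts:
    - name: config
      mountPath: /usr/share/nginx/html/index.html
      subPath: index.html   # mount only this key as a file, not the whole volume
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF

Worth noting: later updates to a configMap are not propagated into a container that mounts it via subPath, unlike a whole-volume configMap mount.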
Feb 11 11:54:52.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:54:53.026: INFO: namespace: e2e-tests-subpath-fql9j, resource: bindings, ignored listing per whitelist Feb 11 11:54:53.031: INFO: namespace e2e-tests-subpath-fql9j deletion completed in 6.346049415s • [SLOW TEST:45.811 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:54:53.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 11 11:55:03.884: INFO: Successfully updated pod "labelsupdate508919de-4cc5-11ea-a6e3-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 11:55:06.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-prnss" for this suite. 
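The projected downwardAPI test above relabels a running pod and then waits for the change to appear in the mounted volume. A rough sketch of the volume it exercises, with placeholder pod, label, and path names that are not taken from the log:

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo         # placeholder name
  labels:
    stage: before-update
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
$ kubectl label pod labels-demo stage=after-update --overwrite

After the relabel, the kubelet eventually rewrites /etc/podinfo/labels, which is the kind of update the test polls for.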
Feb 11 11:55:30.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 11:55:30.246: INFO: namespace: e2e-tests-projected-prnss, resource: bindings, ignored listing per whitelist Feb 11 11:55:30.296: INFO: namespace e2e-tests-projected-prnss deletion completed in 24.245271027s • [SLOW TEST:37.264 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 11:55:30.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 11 11:55:32.155: INFO: Pod name wrapped-volume-race-67b65601-4cc5-11ea-a6e3-0242ac110005: Found 0 pods out of 5 Feb 11 11:55:37.187: INFO: Pod name wrapped-volume-race-67b65601-4cc5-11ea-a6e3-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-67b65601-4cc5-11ea-a6e3-0242ac110005 in namespace e2e-tests-emptydir-wrapper-gtb85, will wait for the garbage collector to delete the pods Feb 11 11:57:19.379: INFO: Deleting ReplicationController wrapped-volume-race-67b65601-4cc5-11ea-a6e3-0242ac110005 took: 30.131216ms Feb 11 11:57:19.781: INFO: Terminating ReplicationController wrapped-volume-race-67b65601-4cc5-11ea-a6e3-0242ac110005 pods took: 401.443851ms STEP: Creating RC which spawns configmap-volume pods Feb 11 11:58:13.370: INFO: Pod name wrapped-volume-race-c7c151da-4cc5-11ea-a6e3-0242ac110005: Found 0 pods out of 5 Feb 11 11:58:18.419: INFO: Pod name wrapped-volume-race-c7c151da-4cc5-11ea-a6e3-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c7c151da-4cc5-11ea-a6e3-0242ac110005 in namespace e2e-tests-emptydir-wrapper-gtb85, will wait for the garbage collector to delete the pods Feb 11 12:00:54.938: INFO: Deleting ReplicationController wrapped-volume-race-c7c151da-4cc5-11ea-a6e3-0242ac110005 took: 184.560014ms Feb 11 12:00:55.639: INFO: Terminating ReplicationController wrapped-volume-race-c7c151da-4cc5-11ea-a6e3-0242ac110005 pods took: 701.119713ms STEP: Creating RC which spawns configmap-volume pods Feb 11 12:01:43.899: INFO: Pod name wrapped-volume-race-4541d7d0-4cc6-11ea-a6e3-0242ac110005: Found 0 pods out of 5 Feb 11 12:01:48.921: INFO: Pod name wrapped-volume-race-4541d7d0-4cc6-11ea-a6e3-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-4541d7d0-4cc6-11ea-a6e3-0242ac110005 in namespace e2e-tests-emptydir-wrapper-gtb85, will wait for the garbage collector to delete the pods Feb 11 12:04:23.166: INFO: Deleting ReplicationController wrapped-volume-race-4541d7d0-4cc6-11ea-a6e3-0242ac110005 took: 32.293026ms Feb 11 12:04:23.567: INFO: Terminating ReplicationController wrapped-volume-race-4541d7d0-4cc6-11ea-a6e3-0242ac110005 pods took: 400.736327ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 12:05:13.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-gtb85" for this suite. Feb 11 12:05:24.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 12:05:24.254: INFO: namespace: e2e-tests-emptydir-wrapper-gtb85, resource: bindings, ignored listing per whitelist Feb 11 12:05:24.341: INFO: namespace e2e-tests-emptydir-wrapper-gtb85 deletion completed in 10.356411816s • [SLOW TEST:594.045 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 12:05:24.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 11 12:05:24.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 11 12:05:24.867: INFO: stderr: "" Feb 11 12:05:24.867: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 12:05:24.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-59st9" for this suite. 
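The api-versions test above only needs the core group to show up in the list that kubectl prints. Outside the framework, a roughly equivalent manual check could be:

$ kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1 && echo 'core v1 API is served'

grep -x matches the whole line, so group versions such as apps/v1 or authentication.k8s.io/v1 cannot produce a false positive.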
Feb 11 12:05:30.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 12:05:31.005: INFO: namespace: e2e-tests-kubectl-59st9, resource: bindings, ignored listing per whitelist Feb 11 12:05:31.052: INFO: namespace e2e-tests-kubectl-59st9 deletion completed in 6.169852658s • [SLOW TEST:6.711 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 12:05:31.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 12:05:31.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5db29' Feb 11 12:05:35.748: INFO: stderr: "" Feb 11 12:05:35.748: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 11 12:05:36.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5db29' Feb 11 12:05:37.029: INFO: stderr: "" Feb 11 12:05:37.030: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 12:05:37.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5db29" for this suite. 
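With --restart=Never (and, on this 1.13-era client, --generator=run-pod/v1), kubectl run submits a bare v1 Pod rather than a Deployment, which is what the test then looks up and deletes. One way to inspect the object without creating anything is a client-side dry run; the boolean --dry-run spelling below matches clients of that vintage and later became --dry-run=client:

$ kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
    --image=docker.io/library/nginx:1.14-alpine --dry-run -o yaml

The emitted manifest carries restartPolicy: Never, so the container runs once and is not restarted when it exits.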
Feb 11 12:05:45.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 12:05:45.295: INFO: namespace: e2e-tests-kubectl-5db29, resource: bindings, ignored listing per whitelist Feb 11 12:05:45.356: INFO: namespace e2e-tests-kubectl-5db29 deletion completed in 8.305461379s • [SLOW TEST:14.304 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 12:05:45.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 11 12:05:45.792: INFO: Waiting up to 5m0s for pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-rwjjz" to be "success or failure" Feb 11 12:05:45.832: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.507668ms Feb 11 12:05:47.847: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054475314s Feb 11 12:05:49.868: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075447725s Feb 11 12:05:52.006: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213711291s Feb 11 12:05:54.036: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243687374s Feb 11 12:05:56.111: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.318721846s Feb 11 12:05:58.126: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.334115019s STEP: Saw pod success Feb 11 12:05:58.126: INFO: Pod "downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005" satisfied condition "success or failure" Feb 11 12:05:58.133: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005 container dapi-container: STEP: delete the pod Feb 11 12:05:59.247: INFO: Waiting for pod downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005 to disappear Feb 11 12:05:59.260: INFO: Pod downward-api-d57f7594-4cc6-11ea-a6e3-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 12:05:59.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rwjjz" for this suite. Feb 11 12:06:05.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 12:06:05.612: INFO: namespace: e2e-tests-downward-api-rwjjz, resource: bindings, ignored listing per whitelist Feb 11 12:06:05.627: INFO: namespace e2e-tests-downward-api-rwjjz deletion completed in 6.332799158s • [SLOW TEST:20.269 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 11 12:06:05.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 11 12:06:05.921: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 11 12:06:10.942: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 11 12:06:16.963: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 11 12:06:18.989: INFO: Creating deployment "test-rollover-deployment" Feb 11 12:06:19.025: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 11 12:06:21.142: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 11 12:06:21.155: INFO: Ensure that both replica sets have 1 created replica Feb 11 12:06:21.164: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 11 12:06:21.178: INFO: Updating deployment test-rollover-deployment Feb 11 12:06:21.178: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 11 12:06:23.758: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 11 12:06:23.779: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 11 
12:06:23.797: INFO: all replica sets need to contain the pod-template-hash label Feb 11 12:06:23.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019579, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019579, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019583, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019579, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} [the same pod-template-hash check and a near-identical status dump were repeated roughly every two seconds until 12:06:44; the only substantive change was ReadyReplicas going from 1 to 2 at 12:06:35, with the Progressing condition's LastUpdateTime advancing accordingly] Feb 11 12:06:44.226: INFO: all replica sets need to contain the pod-template-hash label Feb 11 12:06:44.226: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019579, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019579, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019593, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717019579, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 12:06:45.815: INFO: Feb 11 12:06:45.815: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 11 12:06:45.827: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-gnfdv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gnfdv/deployments/test-rollover-deployment,UID:e9520c7f-4cc6-11ea-a994-fa163e34d433,ResourceVersion:21306906,Generation:2,CreationTimestamp:2020-02-11 12:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-11 12:06:19 +0000 UTC 2020-02-11 12:06:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-11 12:06:45 +0000 UTC 2020-02-11 12:06:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 11 12:06:45.831: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-gnfdv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gnfdv/replicasets/test-rollover-deployment-5b8479fdb6,UID:eaa0946a-4cc6-11ea-a994-fa163e34d433,ResourceVersion:21306893,Generation:2,CreationTimestamp:2020-02-11 12:06:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9520c7f-4cc6-11ea-a994-fa163e34d433 0xc00223f8a7 0xc00223f8a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 11 12:06:45.831: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 11 12:06:45.831: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-gnfdv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gnfdv/replicasets/test-rollover-controller,UID:e181ab89-4cc6-11ea-a994-fa163e34d433,ResourceVersion:21306903,Generation:2,CreationTimestamp:2020-02-11 12:06:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9520c7f-4cc6-11ea-a994-fa163e34d433 0xc00223f717 0xc00223f718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 11 12:06:45.832: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-gnfdv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gnfdv/replicasets/test-rollover-deployment-58494b7559,UID:e9676387-4cc6-11ea-a994-fa163e34d433,ResourceVersion:21306860,Generation:2,CreationTimestamp:2020-02-11 12:06:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9520c7f-4cc6-11ea-a994-fa163e34d433 0xc00223f7d7 0xc00223f7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 11 12:06:45.860: INFO: Pod "test-rollover-deployment-5b8479fdb6-qhg82" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-qhg82,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-gnfdv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gnfdv/pods/test-rollover-deployment-5b8479fdb6-qhg82,UID:eb8a76d9-4cc6-11ea-a994-fa163e34d433,ResourceVersion:21306879,Generation:0,CreationTimestamp:2020-02-11 12:06:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 eaa0946a-4cc6-11ea-a994-fa163e34d433 0xc0021926a7 
0xc0021926a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hp6x6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hp6x6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hp6x6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002192710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002192730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:06:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:06:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:06:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:06:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-11 12:06:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-11 12:06:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://8ad96b8a45dd8a8f2d85bc9f25ab9603745df8f474c2b62f5601c9466f3f6cd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 11 12:06:45.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gnfdv" for this suite. 
Feb 11 12:06:56.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:06:56.745: INFO: namespace: e2e-tests-deployment-gnfdv, resource: bindings, ignored listing per whitelist
Feb 11 12:06:56.814: INFO: namespace e2e-tests-deployment-gnfdv deletion completed in 10.939049316s

• [SLOW TEST:51.185 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:06:56.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:06:57.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 11 12:06:57.087: INFO: stderr: ""
Feb 11 12:06:57.087: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 11 12:06:57.094: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:06:57.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gg7rl" for this suite.
Feb 11 12:07:03.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:07:03.282: INFO: namespace: e2e-tests-kubectl-gg7rl, resource: bindings, ignored listing per whitelist
Feb 11 12:07:03.293: INFO: namespace e2e-tests-kubectl-gg7rl deletion completed in 6.192735878s

S [SKIPPING] [6.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 11 12:06:57.094: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:07:03.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:07:03.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-p9c6k" to be "success or failure"
Feb 11 12:07:03.660: INFO: Pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.464103ms
Feb 11 12:07:05.977: INFO: Pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329881449s
Feb 11 12:07:08.013: INFO: Pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36580179s
Feb 11 12:07:10.038: INFO: Pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390089921s
Feb 11 12:07:12.060: INFO: Pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412815041s
Feb 11 12:07:14.124: INFO: Pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.476791362s
STEP: Saw pod success
Feb 11 12:07:14.125: INFO: Pod "downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:07:14.138: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:07:14.562: INFO: Waiting for pod downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:07:14.576: INFO: Pod downwardapi-volume-03e011e7-4cc7-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:07:14.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p9c6k" for this suite.
Feb 11 12:07:20.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:07:20.902: INFO: namespace: e2e-tests-projected-p9c6k, resource: bindings, ignored listing per whitelist
Feb 11 12:07:21.049: INFO: namespace e2e-tests-projected-p9c6k deletion completed in 6.463426594s

• [SLOW TEST:17.755 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:07:21.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb 11 12:07:21.200: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Feb 11 12:07:21.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:21.929: INFO: stderr: ""
Feb 11 12:07:21.929: INFO: stdout: "service/redis-slave created\n"
Feb 11 12:07:21.930: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Feb 11 12:07:21.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:22.627: INFO: stderr: ""
Feb 11 12:07:22.627: INFO: stdout: "service/redis-master created\n"
Feb 11 12:07:22.630: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Feb 11 12:07:22.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:23.449: INFO: stderr: ""
Feb 11 12:07:23.449: INFO: stdout: "service/frontend created\n"
Feb 11 12:07:23.450: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Feb 11 12:07:23.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:24.049: INFO: stderr: ""
Feb 11 12:07:24.049: INFO: stdout: "deployment.extensions/frontend created\n"
Feb 11 12:07:24.051: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Feb 11 12:07:24.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:24.928: INFO: stderr: ""
Feb 11 12:07:24.929: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb 11 12:07:24.931: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Feb 11 12:07:24.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:25.543: INFO: stderr: ""
Feb 11 12:07:25.543: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 11 12:07:25.543: INFO: Waiting for all frontend pods to be Running.
Feb 11 12:07:55.598: INFO: Waiting for frontend to serve content.
Feb 11 12:07:55.766: INFO: Trying to add a new entry to the guestbook.
Feb 11 12:07:55.838: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 11 12:07:55.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:56.361: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 12:07:56.362: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 12:07:56.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:56.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 12:07:56.820: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 12:07:56.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:57.040: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 12:07:57.040: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 12:07:57.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:57.262: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 12:07:57.262: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 12:07:57.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:57.629: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 12:07:57.630: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 12:07:57.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lw4g7'
Feb 11 12:07:57.832: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 12:07:57.833: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:07:57.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lw4g7" for this suite.
Feb 11 12:08:44.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:08:44.139: INFO: namespace: e2e-tests-kubectl-lw4g7, resource: bindings, ignored listing per whitelist
Feb 11 12:08:44.163: INFO: namespace e2e-tests-kubectl-lw4g7 deletion completed in 46.306986259s

• [SLOW TEST:83.114 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:08:44.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-3feeebbe-4cc7-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 12:08:44.461: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-txdx2" to be "success or failure"
Feb 11 12:08:44.471: INFO: Pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.096323ms
Feb 11 12:08:46.512: INFO: Pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051402433s
Feb 11 12:08:48.544: INFO: Pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082702326s
Feb 11 12:08:50.619: INFO: Pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157659519s
Feb 11 12:08:52.674: INFO: Pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213015s
Feb 11 12:08:54.741: INFO: Pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.27960868s
STEP: Saw pod success
Feb 11 12:08:54.741: INFO: Pod "pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:08:54.749: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 11 12:08:55.027: INFO: Waiting for pod pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:08:55.069: INFO: Pod pod-configmaps-3ffe25d6-4cc7-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:08:55.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-txdx2" for this suite.
Feb 11 12:09:01.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:09:01.585: INFO: namespace: e2e-tests-configmap-txdx2, resource: bindings, ignored listing per whitelist
Feb 11 12:09:01.916: INFO: namespace e2e-tests-configmap-txdx2 deletion completed in 6.755854089s

• [SLOW TEST:17.752 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:09:01.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-smsrk
Feb 11 12:09:12.250: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-smsrk
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 12:09:12.262: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:13:13.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-smsrk" for this suite.
Feb 11 12:13:22.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:13:22.311: INFO: namespace: e2e-tests-container-probe-smsrk, resource: bindings, ignored listing per whitelist
Feb 11 12:13:22.426: INFO: namespace e2e-tests-container-probe-smsrk deletion completed in 8.45223912s

• [SLOW TEST:260.509 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:13:22.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:13:23.051: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.271742ms)
Feb 11 12:13:23.055: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.375497ms)
Feb 11 12:13:23.060: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.988207ms)
Feb 11 12:13:23.066: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.315794ms)
Feb 11 12:13:23.074: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.210116ms)
Feb 11 12:13:23.079: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.806019ms)
Feb 11 12:13:23.088: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.585876ms)
Feb 11 12:13:23.093: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.307031ms)
Feb 11 12:13:23.098: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.452556ms)
Feb 11 12:13:23.103: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.369043ms)
Feb 11 12:13:23.107: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.386815ms)
Feb 11 12:13:23.113: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.180916ms)
Feb 11 12:13:23.188: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 75.506167ms)
Feb 11 12:13:23.198: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.882987ms)
Feb 11 12:13:23.205: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.662945ms)
Feb 11 12:13:23.211: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.30713ms)
Feb 11 12:13:23.217: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.197452ms)
Feb 11 12:13:23.225: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.172556ms)
Feb 11 12:13:23.229: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.058458ms)
Feb 11 12:13:23.235: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.338139ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:13:23.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-t5m48" for this suite.
Feb 11 12:13:29.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:13:29.330: INFO: namespace: e2e-tests-proxy-t5m48, resource: bindings, ignored listing per whitelist
Feb 11 12:13:29.515: INFO: namespace e2e-tests-proxy-t5m48 deletion completed in 6.274421894s

• [SLOW TEST:7.089 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:13:29.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-ea08addc-4cc7-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:13:29.713: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-q4m62" to be "success or failure"
Feb 11 12:13:29.727: INFO: Pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.039955ms
Feb 11 12:13:31.744: INFO: Pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030335226s
Feb 11 12:13:33.768: INFO: Pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055254422s
Feb 11 12:13:36.296: INFO: Pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582532111s
Feb 11 12:13:38.316: INFO: Pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.602881622s
Feb 11 12:13:40.329: INFO: Pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.616003804s
STEP: Saw pod success
Feb 11 12:13:40.329: INFO: Pod "pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:13:40.333: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 12:13:40.430: INFO: Waiting for pod pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:13:40.439: INFO: Pod pod-projected-secrets-ea09cf30-4cc7-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:13:40.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q4m62" for this suite.
Feb 11 12:13:46.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:13:47.096: INFO: namespace: e2e-tests-projected-q4m62, resource: bindings, ignored listing per whitelist
Feb 11 12:13:47.163: INFO: namespace e2e-tests-projected-q4m62 deletion completed in 6.718506543s

• [SLOW TEST:17.648 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
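For reference, the projected-secret-with-defaultMode pattern that the spec above exercises can be declared roughly as in the pod manifest below. This is only an illustrative sketch: the names, image, mount path, and mode are invented for the example and are not the values the test generates.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo         # illustrative name, not the test's generated pod name
spec:
  containers:
  - name: secret-reader
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret && sleep 3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400               # file mode applied to all projected files unless overridden per item
      sources:
      - secret:
          name: demo-secret           # assumed to exist already; the test creates its own secret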
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:13:47.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f48db09c-4cc7-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 12:13:47.367: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-vk4zp" to be "success or failure"
Feb 11 12:13:47.381: INFO: Pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.181746ms
Feb 11 12:13:49.419: INFO: Pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052571762s
Feb 11 12:13:51.430: INFO: Pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062854112s
Feb 11 12:13:53.928: INFO: Pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.56083894s
Feb 11 12:13:55.948: INFO: Pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580786737s
Feb 11 12:13:57.968: INFO: Pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.601229223s
STEP: Saw pod success
Feb 11 12:13:57.968: INFO: Pod "pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:13:57.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 12:13:58.352: INFO: Waiting for pod pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:13:58.373: INFO: Pod pod-projected-configmaps-f48f8643-4cc7-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:13:58.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vk4zp" for this suite.
Feb 11 12:14:04.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:14:04.457: INFO: namespace: e2e-tests-projected-vk4zp, resource: bindings, ignored listing per whitelist
Feb 11 12:14:04.624: INFO: namespace e2e-tests-projected-vk4zp deletion completed in 6.242147009s

• [SLOW TEST:17.461 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
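For reference, the key-to-path mapping with a per-item file mode that the spec above exercises can be declared roughly as below. Names, keys, and paths here are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo      # illustrative only
spec:
  containers:
  - name: config-reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-config/renamed-key && sleep 3600"]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/projected-config
  volumes:
  - name: config-vol
    projected:
      sources:
      - configMap:
          name: demo-config           # assumed ConfigMap containing a key named data-1
          items:
          - key: data-1
            path: renamed-key         # key data-1 is exposed as the file renamed-key
            mode: 0400                # per-item file mode overriding any defaultMode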
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:14:04.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-fef77f55-4cc7-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 12:14:04.901: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-cf79f" to be "success or failure"
Feb 11 12:14:04.943: INFO: Pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.327941ms
Feb 11 12:14:07.198: INFO: Pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296418975s
Feb 11 12:14:09.215: INFO: Pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313068822s
Feb 11 12:14:11.230: INFO: Pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328543615s
Feb 11 12:14:13.238: INFO: Pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.336326481s
Feb 11 12:14:15.280: INFO: Pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378561825s
STEP: Saw pod success
Feb 11 12:14:15.280: INFO: Pod "pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:14:15.289: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 12:14:15.405: INFO: Waiting for pod pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:14:15.414: INFO: Pod pod-projected-configmaps-fef906bd-4cc7-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:14:15.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cf79f" for this suite.
Feb 11 12:14:21.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:14:21.641: INFO: namespace: e2e-tests-projected-cf79f, resource: bindings, ignored listing per whitelist
Feb 11 12:14:21.842: INFO: namespace e2e-tests-projected-cf79f deletion completed in 6.413965453s

• [SLOW TEST:17.218 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
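For reference, consuming the same ConfigMap through two volumes of one pod, as the spec above does, can be declared roughly as below (illustrative names only).
apiVersion: v1
kind: Pod
metadata:
  name: multi-volume-configmap-demo   # illustrative only
spec:
  containers:
  - name: config-reader
    image: busybox
    command: ["sh", "-c", "cat /etc/config-one/data-1 /etc/config-two/data-1"]
    volumeMounts:
    - name: config-vol-one
      mountPath: /etc/config-one
    - name: config-vol-two
      mountPath: /etc/config-two
  volumes:
  - name: config-vol-one
    projected:
      sources:
      - configMap:
          name: demo-config           # the same ConfigMap backs both volumes
  - name: config-vol-two
    projected:
      sources:
      - configMap:
          name: demo-config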
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:14:21.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:14:22.223: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 62.924644ms)
Feb 11 12:14:22.231: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.000844ms)
Feb 11 12:14:22.237: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.221181ms)
Feb 11 12:14:22.245: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.80848ms)
Feb 11 12:14:22.250: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.714129ms)
Feb 11 12:14:22.254: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.908662ms)
Feb 11 12:14:22.258: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.974258ms)
Feb 11 12:14:22.262: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.776417ms)
Feb 11 12:14:22.266: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.357835ms)
Feb 11 12:14:22.271: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.721614ms)
Feb 11 12:14:22.276: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.621188ms)
Feb 11 12:14:22.280: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.491232ms)
Feb 11 12:14:22.287: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.785723ms)
Feb 11 12:14:22.298: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.269363ms)
Feb 11 12:14:22.311: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.663559ms)
Feb 11 12:14:22.325: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.293781ms)
Feb 11 12:14:22.334: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.989505ms)
Feb 11 12:14:22.347: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.482927ms)
Feb 11 12:14:22.365: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.530975ms)
Feb 11 12:14:22.373: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.138456ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:14:22.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-68fz4" for this suite.
Feb 11 12:14:28.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:14:28.692: INFO: namespace: e2e-tests-proxy-68fz4, resource: bindings, ignored listing per whitelist
Feb 11 12:14:28.696: INFO: namespace e2e-tests-proxy-68fz4 deletion completed in 6.312921324s

• [SLOW TEST:6.853 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:14:28.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 11 12:14:28.906: INFO: Waiting up to 5m0s for pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-xtlgj" to be "success or failure"
Feb 11 12:14:28.913: INFO: Pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.010056ms
Feb 11 12:14:31.138: INFO: Pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231616181s
Feb 11 12:14:33.156: INFO: Pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249329051s
Feb 11 12:14:35.218: INFO: Pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311385344s
Feb 11 12:14:37.235: INFO: Pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329044349s
Feb 11 12:14:39.329: INFO: Pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.423011458s
STEP: Saw pod success
Feb 11 12:14:39.329: INFO: Pod "pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:14:39.338: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 12:14:39.516: INFO: Waiting for pod pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:14:39.541: INFO: Pod pod-0d4e91db-4cc8-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:14:39.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xtlgj" for this suite.
Feb 11 12:14:46.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:14:46.420: INFO: namespace: e2e-tests-emptydir-xtlgj, resource: bindings, ignored listing per whitelist
Feb 11 12:14:46.610: INFO: namespace e2e-tests-emptydir-xtlgj deletion completed in 7.048075333s

• [SLOW TEST:17.914 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
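For reference, the default-medium emptyDir volume that the spec above exercises is declared roughly as below (illustrative names only; the test also verifies the 0777 directory permissions from inside the container).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                 # illustrative only
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /scratch && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}                      # default medium (node-local storage); medium: Memory would use tmpfs instead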
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:14:46.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-gds4s
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gds4s to expose endpoints map[]
Feb 11 12:14:47.004: INFO: Get endpoints failed (13.028449ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 11 12:14:48.016: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gds4s exposes endpoints map[] (1.024738181s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-gds4s
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gds4s to expose endpoints map[pod1:[80]]
Feb 11 12:14:53.979: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.943506061s elapsed, will retry)
Feb 11 12:14:59.669: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gds4s exposes endpoints map[pod1:[80]] (11.633106785s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-gds4s
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gds4s to expose endpoints map[pod1:[80] pod2:[80]]
Feb 11 12:15:04.078: INFO: Unexpected endpoints: found map[18ba33ff-4cc8-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.386303928s elapsed, will retry)
Feb 11 12:15:08.967: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gds4s exposes endpoints map[pod1:[80] pod2:[80]] (9.275597752s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-gds4s
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gds4s to expose endpoints map[pod2:[80]]
Feb 11 12:15:09.024: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gds4s exposes endpoints map[pod2:[80]] (12.312851ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-gds4s
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gds4s to expose endpoints map[]
Feb 11 12:15:10.055: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gds4s exposes endpoints map[] (1.024316831s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:15:10.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-gds4s" for this suite.
Feb 11 12:15:32.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:15:32.308: INFO: namespace: e2e-tests-services-gds4s, resource: bindings, ignored listing per whitelist
Feb 11 12:15:32.455: INFO: namespace e2e-tests-services-gds4s deletion completed in 22.278061706s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:45.844 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
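
The Services case above is driving the endpoints controller: create a selector-based service, then add and delete labelled pods and wait for the service's Endpoints object to follow. A hedged kubectl sketch of the same cycle, reusing the service name from the log but with an assumed selector, image, and pod name:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2        # assumed label key/value; the test wires its own
  ports:
  - port: 80
    targetPort: 80
EOF
kubectl get endpoints endpoint-test2                 # no addresses yet, like the map[] check above
kubectl run pod1 --image=nginx --restart=Never --labels=name=endpoint-test2 --port=80   # assumed image
kubectl get endpoints endpoint-test2 -o wide         # pod1's IP:80 appears once the pod is Running
kubectl delete pod pod1
kubectl get endpoints endpoint-test2                 # addresses drain back to empty
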
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:15:32.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:15:32.762: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-vzdcn" to be "success or failure"
Feb 11 12:15:32.801: INFO: Pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.034983ms
Feb 11 12:15:35.389: INFO: Pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626207s
Feb 11 12:15:37.405: INFO: Pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641563504s
Feb 11 12:15:40.173: INFO: Pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.410062562s
Feb 11 12:15:42.199: INFO: Pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.436503104s
Feb 11 12:15:44.242: INFO: Pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.47893354s
STEP: Saw pod success
Feb 11 12:15:44.242: INFO: Pod "downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:15:44.250: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:15:44.505: INFO: Waiting for pod downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:15:44.616: INFO: Pod downwardapi-volume-33625bfe-4cc8-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:15:44.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vzdcn" for this suite.
Feb 11 12:15:50.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:15:50.769: INFO: namespace: e2e-tests-projected-vzdcn, resource: bindings, ignored listing per whitelist
Feb 11 12:15:50.954: INFO: namespace e2e-tests-projected-vzdcn deletion completed in 6.321767665s

• [SLOW TEST:18.499 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
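
The projected downwardAPI case verifies that defaultMode set on a projected volume is applied to the rendered file. The manifest below is a minimal sketch of that wiring, with assumed names, an assumed busybox image, and an arbitrary 0400 mode; the suite asserts the exact permissions with its own mounttest helper rather than stat:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                                   # assumed image
    command: ["sh", "-c", "stat -L -c '%a %n' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                              # octal literal (decimal 256)
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs projected-defaultmode-demo              # expect: 400 /etc/podinfo/podname
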
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:15:50.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:15:51.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:16:03.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6wwg5" for this suite.
Feb 11 12:16:48.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:16:48.245: INFO: namespace: e2e-tests-pods-6wwg5, resource: bindings, ignored listing per whitelist
Feb 11 12:16:48.281: INFO: namespace e2e-tests-pods-6wwg5 deletion completed in 44.371789405s

• [SLOW TEST:57.326 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
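
The websocket exec case talks to the API server's pod exec subresource directly over a websocket; the pod itself is just a container that stays up long enough to exec into, which is why the log only shows the pod being created and the namespace torn down. kubectl exercises the same subresource through its own streaming transport, so a hedged CLI approximation (pod name and image are assumptions) looks like this:

kubectl run ws-exec-demo --image=busybox --restart=Never -- sh -c 'sleep 3600'
kubectl wait --for=condition=Ready pod/ws-exec-demo --timeout=120s
kubectl exec ws-exec-demo -- echo remote command output     # round-trips stdout the way the test checks
kubectl delete pod ws-exec-demo --now
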
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:16:48.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:16:48.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-gxgpk" to be "success or failure"
Feb 11 12:16:48.591: INFO: Pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.520911ms
Feb 11 12:16:50.828: INFO: Pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276042648s
Feb 11 12:16:52.863: INFO: Pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311008831s
Feb 11 12:16:54.893: INFO: Pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341053375s
Feb 11 12:16:57.202: INFO: Pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.649798615s
Feb 11 12:16:59.216: INFO: Pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.664039464s
STEP: Saw pod success
Feb 11 12:16:59.217: INFO: Pod "downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:16:59.223: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:16:59.777: INFO: Waiting for pod downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:16:59.788: INFO: Pod downwardapi-volume-607fbcfc-4cc8-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:16:59.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gxgpk" for this suite.
Feb 11 12:17:05.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:17:05.979: INFO: namespace: e2e-tests-downward-api-gxgpk, resource: bindings, ignored listing per whitelist
Feb 11 12:17:06.254: INFO: namespace e2e-tests-downward-api-gxgpk deletion completed in 6.453225707s

• [SLOW TEST:17.973 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
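
This Downward API volume case is the same defaultMode check as the projected variant earlier, only with a plain downwardAPI volume instead of a projected one. For completeness, a hedged minimal manifest (assumed names and image), differing from the earlier sketch only in the volume stanza:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumed image
    command: ["sh", "-c", "stat -L -c '%a %n' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                         # plain downwardAPI volume, no 'projected' wrapper
      defaultMode: 0400                  # octal literal (decimal 256)
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
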
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:17:06.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 11 12:17:17.191: INFO: Successfully updated pod "annotationupdate6b422d94-4cc8-11ea-a6e3-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:17:19.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ss9hq" for this suite.
Feb 11 12:17:43.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:17:43.742: INFO: namespace: e2e-tests-downward-api-ss9hq, resource: bindings, ignored listing per whitelist
Feb 11 12:17:43.754: INFO: namespace e2e-tests-downward-api-ss9hq deletion completed in 24.454375028s

• [SLOW TEST:37.499 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
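
The annotation-update case creates a pod that mounts its own metadata.annotations through a downwardAPI volume, then patches the annotations and waits for the kubelet to rewrite the projected file, which is the "Successfully updated pod" step above. A hedged sketch of that loop (names, image, and the annotation key are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice                       # illustrative starting annotation
spec:
  containers:
  - name: client-container
    image: busybox                       # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo ---; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
kubectl logs annotationupdate-demo --tail=20    # the projected file catches up after the kubelet's next sync
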
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:17:43.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-pm7rv
I0211 12:17:44.279345       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-pm7rv, replica count: 1
I0211 12:17:45.330514       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:46.331193       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:47.331915       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:48.332696       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:49.333562       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:50.333998       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:51.334492       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:52.335442       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:53.335968       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:54.336536       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:55.337310       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:17:56.337907       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 11 12:17:56.474: INFO: Created: latency-svc-dsk9p
Feb 11 12:17:56.678: INFO: Got endpoints: latency-svc-dsk9p [239.471558ms]
Feb 11 12:17:56.818: INFO: Created: latency-svc-df7wv
Feb 11 12:17:56.876: INFO: Created: latency-svc-k5cvb
Feb 11 12:17:56.895: INFO: Got endpoints: latency-svc-df7wv [215.551686ms]
Feb 11 12:17:57.045: INFO: Got endpoints: latency-svc-k5cvb [364.868916ms]
Feb 11 12:17:57.069: INFO: Created: latency-svc-ghsjm
Feb 11 12:17:57.085: INFO: Got endpoints: latency-svc-ghsjm [407.029817ms]
Feb 11 12:17:57.136: INFO: Created: latency-svc-9mpf4
Feb 11 12:17:57.241: INFO: Got endpoints: latency-svc-9mpf4 [561.940238ms]
Feb 11 12:17:57.277: INFO: Created: latency-svc-brhxl
Feb 11 12:17:57.293: INFO: Got endpoints: latency-svc-brhxl [615.347508ms]
Feb 11 12:17:57.550: INFO: Created: latency-svc-r5dwz
Feb 11 12:17:57.572: INFO: Got endpoints: latency-svc-r5dwz [329.587783ms]
Feb 11 12:17:57.791: INFO: Created: latency-svc-xhhm9
Feb 11 12:17:57.921: INFO: Got endpoints: latency-svc-xhhm9 [1.241645806s]
Feb 11 12:17:58.200: INFO: Created: latency-svc-v9k4b
Feb 11 12:17:58.208: INFO: Got endpoints: latency-svc-v9k4b [1.529767789s]
Feb 11 12:17:58.379: INFO: Created: latency-svc-fwb8h
Feb 11 12:17:58.402: INFO: Got endpoints: latency-svc-fwb8h [1.72208839s]
Feb 11 12:17:58.543: INFO: Created: latency-svc-7wlwl
Feb 11 12:17:58.643: INFO: Got endpoints: latency-svc-7wlwl [1.962910582s]
Feb 11 12:17:58.723: INFO: Created: latency-svc-6g8wp
Feb 11 12:17:58.921: INFO: Got endpoints: latency-svc-6g8wp [2.2410948s]
Feb 11 12:17:58.964: INFO: Created: latency-svc-5xzxf
Feb 11 12:17:58.989: INFO: Got endpoints: latency-svc-5xzxf [2.311264081s]
Feb 11 12:17:59.281: INFO: Created: latency-svc-smdlw
Feb 11 12:17:59.468: INFO: Got endpoints: latency-svc-smdlw [2.788235182s]
Feb 11 12:17:59.499: INFO: Created: latency-svc-sm56g
Feb 11 12:17:59.508: INFO: Got endpoints: latency-svc-sm56g [2.829824316s]
Feb 11 12:17:59.758: INFO: Created: latency-svc-h888b
Feb 11 12:17:59.773: INFO: Got endpoints: latency-svc-h888b [3.092603777s]
Feb 11 12:17:59.924: INFO: Created: latency-svc-wkgqq
Feb 11 12:17:59.946: INFO: Got endpoints: latency-svc-wkgqq [3.266096224s]
Feb 11 12:18:00.136: INFO: Created: latency-svc-hslpz
Feb 11 12:18:00.166: INFO: Got endpoints: latency-svc-hslpz [3.271180335s]
Feb 11 12:18:00.248: INFO: Created: latency-svc-95jsh
Feb 11 12:18:00.388: INFO: Got endpoints: latency-svc-95jsh [3.34285008s]
Feb 11 12:18:00.419: INFO: Created: latency-svc-q6zbf
Feb 11 12:18:00.481: INFO: Got endpoints: latency-svc-q6zbf [3.395710053s]
Feb 11 12:18:00.517: INFO: Created: latency-svc-7m6lv
Feb 11 12:18:00.733: INFO: Got endpoints: latency-svc-7m6lv [3.438934954s]
Feb 11 12:18:00.759: INFO: Created: latency-svc-zs4bk
Feb 11 12:18:00.862: INFO: Got endpoints: latency-svc-zs4bk [3.290244894s]
Feb 11 12:18:00.938: INFO: Created: latency-svc-jz7p5
Feb 11 12:18:01.155: INFO: Got endpoints: latency-svc-jz7p5 [3.232695465s]
Feb 11 12:18:01.198: INFO: Created: latency-svc-d849p
Feb 11 12:18:01.408: INFO: Got endpoints: latency-svc-d849p [3.199778025s]
Feb 11 12:18:01.446: INFO: Created: latency-svc-jjqlq
Feb 11 12:18:01.463: INFO: Got endpoints: latency-svc-jjqlq [3.06136477s]
Feb 11 12:18:01.662: INFO: Created: latency-svc-clg2r
Feb 11 12:18:01.675: INFO: Got endpoints: latency-svc-clg2r [3.031175695s]
Feb 11 12:18:01.820: INFO: Created: latency-svc-rqkxc
Feb 11 12:18:01.849: INFO: Got endpoints: latency-svc-rqkxc [2.928050136s]
Feb 11 12:18:01.992: INFO: Created: latency-svc-fcq2f
Feb 11 12:18:01.997: INFO: Got endpoints: latency-svc-fcq2f [3.007105783s]
Feb 11 12:18:02.187: INFO: Created: latency-svc-98p8s
Feb 11 12:18:02.213: INFO: Got endpoints: latency-svc-98p8s [2.74411261s]
Feb 11 12:18:02.270: INFO: Created: latency-svc-2b2tf
Feb 11 12:18:02.481: INFO: Got endpoints: latency-svc-2b2tf [2.973141637s]
Feb 11 12:18:02.519: INFO: Created: latency-svc-zt26k
Feb 11 12:18:02.776: INFO: Got endpoints: latency-svc-zt26k [3.002972482s]
Feb 11 12:18:02.896: INFO: Created: latency-svc-2sdcp
Feb 11 12:18:03.064: INFO: Got endpoints: latency-svc-2sdcp [3.117613769s]
Feb 11 12:18:03.081: INFO: Created: latency-svc-hvdkt
Feb 11 12:18:03.121: INFO: Got endpoints: latency-svc-hvdkt [2.953975635s]
Feb 11 12:18:03.236: INFO: Created: latency-svc-5lxqh
Feb 11 12:18:03.260: INFO: Got endpoints: latency-svc-5lxqh [2.871435412s]
Feb 11 12:18:03.310: INFO: Created: latency-svc-fds4j
Feb 11 12:18:03.454: INFO: Got endpoints: latency-svc-fds4j [2.972064411s]
Feb 11 12:18:03.485: INFO: Created: latency-svc-sqnss
Feb 11 12:18:03.502: INFO: Got endpoints: latency-svc-sqnss [2.768887004s]
Feb 11 12:18:03.656: INFO: Created: latency-svc-sgqhd
Feb 11 12:18:03.677: INFO: Got endpoints: latency-svc-sgqhd [2.814505373s]
Feb 11 12:18:03.827: INFO: Created: latency-svc-tzmpd
Feb 11 12:18:03.864: INFO: Got endpoints: latency-svc-tzmpd [2.708408053s]
Feb 11 12:18:04.004: INFO: Created: latency-svc-4llh9
Feb 11 12:18:04.040: INFO: Got endpoints: latency-svc-4llh9 [2.631903805s]
Feb 11 12:18:04.094: INFO: Created: latency-svc-xvxgb
Feb 11 12:18:04.240: INFO: Got endpoints: latency-svc-xvxgb [2.776044479s]
Feb 11 12:18:04.281: INFO: Created: latency-svc-x2bnl
Feb 11 12:18:04.480: INFO: Got endpoints: latency-svc-x2bnl [2.804562143s]
Feb 11 12:18:04.517: INFO: Created: latency-svc-7qhhm
Feb 11 12:18:04.526: INFO: Got endpoints: latency-svc-7qhhm [2.676137821s]
Feb 11 12:18:04.718: INFO: Created: latency-svc-4vnqd
Feb 11 12:18:04.722: INFO: Got endpoints: latency-svc-4vnqd [2.725237954s]
Feb 11 12:18:04.770: INFO: Created: latency-svc-v84cd
Feb 11 12:18:04.779: INFO: Got endpoints: latency-svc-v84cd [2.565775948s]
Feb 11 12:18:04.940: INFO: Created: latency-svc-wr9dw
Feb 11 12:18:04.943: INFO: Got endpoints: latency-svc-wr9dw [2.461257657s]
Feb 11 12:18:05.140: INFO: Created: latency-svc-mbzt2
Feb 11 12:18:05.179: INFO: Got endpoints: latency-svc-mbzt2 [2.402857029s]
Feb 11 12:18:05.325: INFO: Created: latency-svc-zk4dm
Feb 11 12:18:05.386: INFO: Got endpoints: latency-svc-zk4dm [2.321030456s]
Feb 11 12:18:05.502: INFO: Created: latency-svc-qd866
Feb 11 12:18:05.521: INFO: Got endpoints: latency-svc-qd866 [2.400562263s]
Feb 11 12:18:05.613: INFO: Created: latency-svc-97q2m
Feb 11 12:18:05.637: INFO: Got endpoints: latency-svc-97q2m [2.376281539s]
Feb 11 12:18:05.771: INFO: Created: latency-svc-2f767
Feb 11 12:18:05.792: INFO: Got endpoints: latency-svc-2f767 [2.337280855s]
Feb 11 12:18:05.983: INFO: Created: latency-svc-rbtqn
Feb 11 12:18:06.029: INFO: Got endpoints: latency-svc-rbtqn [2.526815591s]
Feb 11 12:18:06.068: INFO: Created: latency-svc-5dxh2
Feb 11 12:18:06.185: INFO: Got endpoints: latency-svc-5dxh2 [2.507830056s]
Feb 11 12:18:06.260: INFO: Created: latency-svc-5sf7r
Feb 11 12:18:06.353: INFO: Got endpoints: latency-svc-5sf7r [2.488731782s]
Feb 11 12:18:06.421: INFO: Created: latency-svc-8pwtc
Feb 11 12:18:06.454: INFO: Got endpoints: latency-svc-8pwtc [2.413693713s]
Feb 11 12:18:06.596: INFO: Created: latency-svc-b7nmn
Feb 11 12:18:06.668: INFO: Got endpoints: latency-svc-b7nmn [2.427185289s]
Feb 11 12:18:06.768: INFO: Created: latency-svc-d8hd9
Feb 11 12:18:06.997: INFO: Created: latency-svc-8m9q9
Feb 11 12:18:07.021: INFO: Got endpoints: latency-svc-8m9q9 [2.494832668s]
Feb 11 12:18:07.030: INFO: Got endpoints: latency-svc-d8hd9 [2.550449255s]
Feb 11 12:18:07.067: INFO: Created: latency-svc-rrgrd
Feb 11 12:18:07.078: INFO: Got endpoints: latency-svc-rrgrd [2.355103611s]
Feb 11 12:18:07.278: INFO: Created: latency-svc-697fj
Feb 11 12:18:07.316: INFO: Got endpoints: latency-svc-697fj [2.53720116s]
Feb 11 12:18:07.554: INFO: Created: latency-svc-b7v4l
Feb 11 12:18:07.570: INFO: Got endpoints: latency-svc-b7v4l [2.627687631s]
Feb 11 12:18:07.703: INFO: Created: latency-svc-b2dk8
Feb 11 12:18:07.731: INFO: Got endpoints: latency-svc-b2dk8 [2.551223818s]
Feb 11 12:18:07.903: INFO: Created: latency-svc-62qdz
Feb 11 12:18:07.907: INFO: Got endpoints: latency-svc-62qdz [2.521430131s]
Feb 11 12:18:07.967: INFO: Created: latency-svc-l998j
Feb 11 12:18:08.054: INFO: Got endpoints: latency-svc-l998j [2.53246942s]
Feb 11 12:18:08.093: INFO: Created: latency-svc-67k4s
Feb 11 12:18:08.104: INFO: Got endpoints: latency-svc-67k4s [2.46718457s]
Feb 11 12:18:08.150: INFO: Created: latency-svc-m7tq9
Feb 11 12:18:08.260: INFO: Got endpoints: latency-svc-m7tq9 [2.468530929s]
Feb 11 12:18:08.310: INFO: Created: latency-svc-j2mzc
Feb 11 12:18:08.389: INFO: Got endpoints: latency-svc-j2mzc [2.360098567s]
Feb 11 12:18:08.423: INFO: Created: latency-svc-2kpdx
Feb 11 12:18:08.438: INFO: Got endpoints: latency-svc-2kpdx [2.252691408s]
Feb 11 12:18:08.486: INFO: Created: latency-svc-jt4kq
Feb 11 12:18:08.614: INFO: Got endpoints: latency-svc-jt4kq [2.260049822s]
Feb 11 12:18:08.644: INFO: Created: latency-svc-c84gh
Feb 11 12:18:08.680: INFO: Got endpoints: latency-svc-c84gh [2.225776346s]
Feb 11 12:18:08.837: INFO: Created: latency-svc-rwkbw
Feb 11 12:18:08.865: INFO: Got endpoints: latency-svc-rwkbw [2.196570679s]
Feb 11 12:18:08.933: INFO: Created: latency-svc-5w8wv
Feb 11 12:18:09.016: INFO: Got endpoints: latency-svc-5w8wv [1.995430018s]
Feb 11 12:18:09.081: INFO: Created: latency-svc-cfnr7
Feb 11 12:18:09.084: INFO: Got endpoints: latency-svc-cfnr7 [2.053619736s]
Feb 11 12:18:09.310: INFO: Created: latency-svc-54rhr
Feb 11 12:18:09.318: INFO: Got endpoints: latency-svc-54rhr [2.24030559s]
Feb 11 12:18:09.487: INFO: Created: latency-svc-bxlcb
Feb 11 12:18:09.510: INFO: Got endpoints: latency-svc-bxlcb [2.193262834s]
Feb 11 12:18:09.570: INFO: Created: latency-svc-dh4lc
Feb 11 12:18:09.666: INFO: Got endpoints: latency-svc-dh4lc [2.095597554s]
Feb 11 12:18:09.762: INFO: Created: latency-svc-znslk
Feb 11 12:18:09.885: INFO: Got endpoints: latency-svc-znslk [2.15357521s]
Feb 11 12:18:09.905: INFO: Created: latency-svc-kv2gl
Feb 11 12:18:09.932: INFO: Got endpoints: latency-svc-kv2gl [2.024565787s]
Feb 11 12:18:10.113: INFO: Created: latency-svc-fqzhq
Feb 11 12:18:10.124: INFO: Got endpoints: latency-svc-fqzhq [2.068413191s]
Feb 11 12:18:10.265: INFO: Created: latency-svc-7pjjf
Feb 11 12:18:10.279: INFO: Got endpoints: latency-svc-7pjjf [2.175196324s]
Feb 11 12:18:10.347: INFO: Created: latency-svc-rwnb5
Feb 11 12:18:10.448: INFO: Got endpoints: latency-svc-rwnb5 [2.187358742s]
Feb 11 12:18:10.484: INFO: Created: latency-svc-lrrfx
Feb 11 12:18:10.523: INFO: Got endpoints: latency-svc-lrrfx [2.133913958s]
Feb 11 12:18:10.799: INFO: Created: latency-svc-xpdwq
Feb 11 12:18:10.827: INFO: Got endpoints: latency-svc-xpdwq [2.389038115s]
Feb 11 12:18:10.999: INFO: Created: latency-svc-2nmhn
Feb 11 12:18:11.041: INFO: Got endpoints: latency-svc-2nmhn [2.426829224s]
Feb 11 12:18:11.231: INFO: Created: latency-svc-65fpp
Feb 11 12:18:11.495: INFO: Got endpoints: latency-svc-65fpp [2.814481457s]
Feb 11 12:18:11.527: INFO: Created: latency-svc-n8vs8
Feb 11 12:18:11.555: INFO: Got endpoints: latency-svc-n8vs8 [2.689302463s]
Feb 11 12:18:11.665: INFO: Created: latency-svc-xk6zd
Feb 11 12:18:11.738: INFO: Got endpoints: latency-svc-xk6zd [2.721019691s]
Feb 11 12:18:12.568: INFO: Created: latency-svc-ksfrf
Feb 11 12:18:12.790: INFO: Got endpoints: latency-svc-ksfrf [3.706115292s]
Feb 11 12:18:12.833: INFO: Created: latency-svc-bmgwf
Feb 11 12:18:12.864: INFO: Got endpoints: latency-svc-bmgwf [3.545462735s]
Feb 11 12:18:13.020: INFO: Created: latency-svc-vf6pr
Feb 11 12:18:13.020: INFO: Got endpoints: latency-svc-vf6pr [3.509552139s]
Feb 11 12:18:13.065: INFO: Created: latency-svc-nxxms
Feb 11 12:18:13.079: INFO: Got endpoints: latency-svc-nxxms [3.412329744s]
Feb 11 12:18:13.209: INFO: Created: latency-svc-wddqf
Feb 11 12:18:13.239: INFO: Got endpoints: latency-svc-wddqf [3.353644906s]
Feb 11 12:18:13.289: INFO: Created: latency-svc-2sj6n
Feb 11 12:18:13.417: INFO: Got endpoints: latency-svc-2sj6n [3.483958347s]
Feb 11 12:18:13.454: INFO: Created: latency-svc-b2gn7
Feb 11 12:18:13.493: INFO: Got endpoints: latency-svc-b2gn7 [3.369310262s]
Feb 11 12:18:13.639: INFO: Created: latency-svc-rcb72
Feb 11 12:18:13.676: INFO: Got endpoints: latency-svc-rcb72 [3.396124142s]
Feb 11 12:18:13.742: INFO: Created: latency-svc-hdmbh
Feb 11 12:18:13.840: INFO: Got endpoints: latency-svc-hdmbh [3.391060902s]
Feb 11 12:18:13.878: INFO: Created: latency-svc-xqpfp
Feb 11 12:18:13.902: INFO: Got endpoints: latency-svc-xqpfp [3.378582986s]
Feb 11 12:18:14.094: INFO: Created: latency-svc-bcbqx
Feb 11 12:18:15.066: INFO: Got endpoints: latency-svc-bcbqx [4.238063777s]
Feb 11 12:18:15.161: INFO: Created: latency-svc-tslrb
Feb 11 12:18:15.420: INFO: Got endpoints: latency-svc-tslrb [4.378006338s]
Feb 11 12:18:15.789: INFO: Created: latency-svc-ghbg6
Feb 11 12:18:15.995: INFO: Got endpoints: latency-svc-ghbg6 [4.499169002s]
Feb 11 12:18:16.003: INFO: Created: latency-svc-ms62s
Feb 11 12:18:16.016: INFO: Got endpoints: latency-svc-ms62s [4.461526974s]
Feb 11 12:18:16.227: INFO: Created: latency-svc-swmtm
Feb 11 12:18:16.249: INFO: Got endpoints: latency-svc-swmtm [4.510849616s]
Feb 11 12:18:16.450: INFO: Created: latency-svc-x428p
Feb 11 12:18:16.458: INFO: Got endpoints: latency-svc-x428p [3.667285975s]
Feb 11 12:18:16.712: INFO: Created: latency-svc-dzrj9
Feb 11 12:18:16.713: INFO: Got endpoints: latency-svc-dzrj9 [3.848169091s]
Feb 11 12:18:16.873: INFO: Created: latency-svc-dt5jb
Feb 11 12:18:16.882: INFO: Got endpoints: latency-svc-dt5jb [3.862338632s]
Feb 11 12:18:16.944: INFO: Created: latency-svc-gnntm
Feb 11 12:18:17.045: INFO: Got endpoints: latency-svc-gnntm [3.965776579s]
Feb 11 12:18:17.049: INFO: Created: latency-svc-4xxpz
Feb 11 12:18:17.082: INFO: Got endpoints: latency-svc-4xxpz [3.842755688s]
Feb 11 12:18:17.128: INFO: Created: latency-svc-zbcmh
Feb 11 12:18:17.228: INFO: Got endpoints: latency-svc-zbcmh [3.810812677s]
Feb 11 12:18:17.249: INFO: Created: latency-svc-mlc2w
Feb 11 12:18:17.294: INFO: Got endpoints: latency-svc-mlc2w [3.800528167s]
Feb 11 12:18:17.488: INFO: Created: latency-svc-8wzmq
Feb 11 12:18:17.506: INFO: Got endpoints: latency-svc-8wzmq [3.829893227s]
Feb 11 12:18:17.570: INFO: Created: latency-svc-czxpz
Feb 11 12:18:17.711: INFO: Got endpoints: latency-svc-czxpz [3.871062664s]
Feb 11 12:18:17.754: INFO: Created: latency-svc-grqz8
Feb 11 12:18:17.786: INFO: Got endpoints: latency-svc-grqz8 [3.882982427s]
Feb 11 12:18:17.939: INFO: Created: latency-svc-kvb4w
Feb 11 12:18:18.099: INFO: Got endpoints: latency-svc-kvb4w [3.03286753s]
Feb 11 12:18:18.101: INFO: Created: latency-svc-ph4rz
Feb 11 12:18:18.166: INFO: Got endpoints: latency-svc-ph4rz [2.745328345s]
Feb 11 12:18:18.328: INFO: Created: latency-svc-nl8tv
Feb 11 12:18:18.340: INFO: Got endpoints: latency-svc-nl8tv [2.345142579s]
Feb 11 12:18:18.503: INFO: Created: latency-svc-gmcsq
Feb 11 12:18:18.515: INFO: Got endpoints: latency-svc-gmcsq [2.498165797s]
Feb 11 12:18:18.686: INFO: Created: latency-svc-pqtkl
Feb 11 12:18:18.759: INFO: Got endpoints: latency-svc-pqtkl [2.510139817s]
Feb 11 12:18:18.881: INFO: Created: latency-svc-5rkxf
Feb 11 12:18:18.947: INFO: Created: latency-svc-48pdh
Feb 11 12:18:18.948: INFO: Got endpoints: latency-svc-5rkxf [2.4901918s]
Feb 11 12:18:19.027: INFO: Got endpoints: latency-svc-48pdh [2.313864243s]
Feb 11 12:18:19.060: INFO: Created: latency-svc-t6qv9
Feb 11 12:18:19.078: INFO: Got endpoints: latency-svc-t6qv9 [2.195229768s]
Feb 11 12:18:19.121: INFO: Created: latency-svc-rcb6f
Feb 11 12:18:19.173: INFO: Got endpoints: latency-svc-rcb6f [2.127947684s]
Feb 11 12:18:19.215: INFO: Created: latency-svc-4hdd4
Feb 11 12:18:19.217: INFO: Got endpoints: latency-svc-4hdd4 [2.134755222s]
Feb 11 12:18:19.264: INFO: Created: latency-svc-c4fz6
Feb 11 12:18:19.453: INFO: Got endpoints: latency-svc-c4fz6 [2.224930162s]
Feb 11 12:18:19.532: INFO: Created: latency-svc-d2vhh
Feb 11 12:18:19.620: INFO: Got endpoints: latency-svc-d2vhh [2.325415682s]
Feb 11 12:18:19.680: INFO: Created: latency-svc-rh4xj
Feb 11 12:18:19.861: INFO: Got endpoints: latency-svc-rh4xj [2.35515704s]
Feb 11 12:18:19.886: INFO: Created: latency-svc-dwlmb
Feb 11 12:18:19.947: INFO: Got endpoints: latency-svc-dwlmb [2.23533844s]
Feb 11 12:18:20.063: INFO: Created: latency-svc-smf5n
Feb 11 12:18:20.081: INFO: Got endpoints: latency-svc-smf5n [2.295007288s]
Feb 11 12:18:20.221: INFO: Created: latency-svc-mnmjd
Feb 11 12:18:20.244: INFO: Got endpoints: latency-svc-mnmjd [2.144534912s]
Feb 11 12:18:20.253: INFO: Created: latency-svc-zdg6q
Feb 11 12:18:20.360: INFO: Got endpoints: latency-svc-zdg6q [2.193418707s]
Feb 11 12:18:20.399: INFO: Created: latency-svc-sz4x8
Feb 11 12:18:20.403: INFO: Got endpoints: latency-svc-sz4x8 [2.06172899s]
Feb 11 12:18:20.538: INFO: Created: latency-svc-ncg94
Feb 11 12:18:20.578: INFO: Got endpoints: latency-svc-ncg94 [2.062802964s]
Feb 11 12:18:20.722: INFO: Created: latency-svc-9qld7
Feb 11 12:18:20.755: INFO: Got endpoints: latency-svc-9qld7 [1.995277384s]
Feb 11 12:18:20.920: INFO: Created: latency-svc-cwshl
Feb 11 12:18:20.945: INFO: Got endpoints: latency-svc-cwshl [1.996634163s]
Feb 11 12:18:21.004: INFO: Created: latency-svc-mtd5z
Feb 11 12:18:21.066: INFO: Got endpoints: latency-svc-mtd5z [2.039159587s]
Feb 11 12:18:21.138: INFO: Created: latency-svc-kzzc2
Feb 11 12:18:21.161: INFO: Got endpoints: latency-svc-kzzc2 [2.083149022s]
Feb 11 12:18:21.258: INFO: Created: latency-svc-zcxrg
Feb 11 12:18:21.269: INFO: Got endpoints: latency-svc-zcxrg [2.095793601s]
Feb 11 12:18:21.335: INFO: Created: latency-svc-8nqjp
Feb 11 12:18:21.479: INFO: Got endpoints: latency-svc-8nqjp [2.26116324s]
Feb 11 12:18:21.522: INFO: Created: latency-svc-swdk8
Feb 11 12:18:21.534: INFO: Got endpoints: latency-svc-swdk8 [2.079887032s]
Feb 11 12:18:21.664: INFO: Created: latency-svc-5f68r
Feb 11 12:18:21.707: INFO: Got endpoints: latency-svc-5f68r [2.086701252s]
Feb 11 12:18:21.928: INFO: Created: latency-svc-wnp7z
Feb 11 12:18:21.935: INFO: Got endpoints: latency-svc-wnp7z [2.072806024s]
Feb 11 12:18:22.115: INFO: Created: latency-svc-486zr
Feb 11 12:18:22.160: INFO: Got endpoints: latency-svc-486zr [2.212368142s]
Feb 11 12:18:22.327: INFO: Created: latency-svc-wmxnn
Feb 11 12:18:22.333: INFO: Got endpoints: latency-svc-wmxnn [2.251867061s]
Feb 11 12:18:22.499: INFO: Created: latency-svc-b488z
Feb 11 12:18:22.526: INFO: Got endpoints: latency-svc-b488z [2.281859277s]
Feb 11 12:18:22.677: INFO: Created: latency-svc-ks72l
Feb 11 12:18:22.690: INFO: Got endpoints: latency-svc-ks72l [2.330046648s]
Feb 11 12:18:22.745: INFO: Created: latency-svc-gjfcp
Feb 11 12:18:22.939: INFO: Got endpoints: latency-svc-gjfcp [2.536283298s]
Feb 11 12:18:23.050: INFO: Created: latency-svc-b7xwr
Feb 11 12:18:23.130: INFO: Got endpoints: latency-svc-b7xwr [2.552477861s]
Feb 11 12:18:23.164: INFO: Created: latency-svc-zg8fm
Feb 11 12:18:23.170: INFO: Got endpoints: latency-svc-zg8fm [2.414353262s]
Feb 11 12:18:23.249: INFO: Created: latency-svc-hsxgj
Feb 11 12:18:23.429: INFO: Got endpoints: latency-svc-hsxgj [2.48411359s]
Feb 11 12:18:23.472: INFO: Created: latency-svc-9pnb2
Feb 11 12:18:23.499: INFO: Got endpoints: latency-svc-9pnb2 [2.432465768s]
Feb 11 12:18:24.194: INFO: Created: latency-svc-dtwt8
Feb 11 12:18:24.194: INFO: Got endpoints: latency-svc-dtwt8 [3.033059693s]
Feb 11 12:18:24.354: INFO: Created: latency-svc-kh28h
Feb 11 12:18:24.398: INFO: Got endpoints: latency-svc-kh28h [3.127981089s]
Feb 11 12:18:24.608: INFO: Created: latency-svc-kp8hk
Feb 11 12:18:24.635: INFO: Got endpoints: latency-svc-kp8hk [3.156302819s]
Feb 11 12:18:24.795: INFO: Created: latency-svc-9qgpf
Feb 11 12:18:24.836: INFO: Got endpoints: latency-svc-9qgpf [3.302079657s]
Feb 11 12:18:25.024: INFO: Created: latency-svc-h8h8f
Feb 11 12:18:25.030: INFO: Got endpoints: latency-svc-h8h8f [3.322770612s]
Feb 11 12:18:25.192: INFO: Created: latency-svc-s9klr
Feb 11 12:18:25.219: INFO: Got endpoints: latency-svc-s9klr [3.284048276s]
Feb 11 12:18:25.256: INFO: Created: latency-svc-vwkwd
Feb 11 12:18:25.259: INFO: Got endpoints: latency-svc-vwkwd [3.098749869s]
Feb 11 12:18:25.380: INFO: Created: latency-svc-pkpql
Feb 11 12:18:25.397: INFO: Got endpoints: latency-svc-pkpql [3.063596983s]
Feb 11 12:18:25.536: INFO: Created: latency-svc-bxtgl
Feb 11 12:18:25.568: INFO: Got endpoints: latency-svc-bxtgl [3.040996262s]
Feb 11 12:18:25.716: INFO: Created: latency-svc-r47hx
Feb 11 12:18:25.741: INFO: Got endpoints: latency-svc-r47hx [3.050938331s]
Feb 11 12:18:25.796: INFO: Created: latency-svc-4x2rm
Feb 11 12:18:25.919: INFO: Got endpoints: latency-svc-4x2rm [2.979840745s]
Feb 11 12:18:25.949: INFO: Created: latency-svc-nrrvj
Feb 11 12:18:25.973: INFO: Got endpoints: latency-svc-nrrvj [2.842069017s]
Feb 11 12:18:26.012: INFO: Created: latency-svc-cwflp
Feb 11 12:18:26.106: INFO: Got endpoints: latency-svc-cwflp [2.936052474s]
Feb 11 12:18:26.130: INFO: Created: latency-svc-z5ltx
Feb 11 12:18:26.141: INFO: Got endpoints: latency-svc-z5ltx [2.711030194s]
Feb 11 12:18:26.229: INFO: Created: latency-svc-r2p9k
Feb 11 12:18:26.354: INFO: Created: latency-svc-98xwx
Feb 11 12:18:26.370: INFO: Got endpoints: latency-svc-r2p9k [2.86998734s]
Feb 11 12:18:26.385: INFO: Got endpoints: latency-svc-98xwx [2.190979954s]
Feb 11 12:18:26.479: INFO: Created: latency-svc-9wq2x
Feb 11 12:18:26.507: INFO: Got endpoints: latency-svc-9wq2x [2.109337215s]
Feb 11 12:18:26.574: INFO: Created: latency-svc-9k828
Feb 11 12:18:26.672: INFO: Got endpoints: latency-svc-9k828 [2.036113667s]
Feb 11 12:18:26.697: INFO: Created: latency-svc-g5r2p
Feb 11 12:18:26.722: INFO: Got endpoints: latency-svc-g5r2p [1.886003884s]
Feb 11 12:18:26.790: INFO: Created: latency-svc-hgswh
Feb 11 12:18:26.869: INFO: Created: latency-svc-jnhpv
Feb 11 12:18:26.884: INFO: Got endpoints: latency-svc-hgswh [1.854515446s]
Feb 11 12:18:26.884: INFO: Got endpoints: latency-svc-jnhpv [1.664821774s]
Feb 11 12:18:26.927: INFO: Created: latency-svc-mmsvv
Feb 11 12:18:27.035: INFO: Got endpoints: latency-svc-mmsvv [1.775143485s]
Feb 11 12:18:27.047: INFO: Created: latency-svc-g4pqf
Feb 11 12:18:27.063: INFO: Got endpoints: latency-svc-g4pqf [1.666402918s]
Feb 11 12:18:27.116: INFO: Created: latency-svc-q7mrr
Feb 11 12:18:27.231: INFO: Created: latency-svc-rv4h7
Feb 11 12:18:27.253: INFO: Got endpoints: latency-svc-q7mrr [1.685218525s]
Feb 11 12:18:27.259: INFO: Got endpoints: latency-svc-rv4h7 [1.517658369s]
Feb 11 12:18:27.295: INFO: Created: latency-svc-bmpph
Feb 11 12:18:27.560: INFO: Got endpoints: latency-svc-bmpph [1.640480864s]
Feb 11 12:18:27.614: INFO: Created: latency-svc-gpjr8
Feb 11 12:18:27.624: INFO: Got endpoints: latency-svc-gpjr8 [1.650679131s]
Feb 11 12:18:27.764: INFO: Created: latency-svc-fn7ls
Feb 11 12:18:27.812: INFO: Got endpoints: latency-svc-fn7ls [1.705686428s]
Feb 11 12:18:27.952: INFO: Created: latency-svc-s99hj
Feb 11 12:18:27.979: INFO: Got endpoints: latency-svc-s99hj [1.838008969s]
Feb 11 12:18:28.138: INFO: Created: latency-svc-nhlf8
Feb 11 12:18:28.154: INFO: Got endpoints: latency-svc-nhlf8 [1.784086758s]
Feb 11 12:18:28.309: INFO: Created: latency-svc-796vg
Feb 11 12:18:28.341: INFO: Got endpoints: latency-svc-796vg [1.955736599s]
Feb 11 12:18:28.383: INFO: Created: latency-svc-nt4gl
Feb 11 12:18:28.454: INFO: Got endpoints: latency-svc-nt4gl [1.94662273s]
Feb 11 12:18:28.524: INFO: Created: latency-svc-g888t
Feb 11 12:18:28.524: INFO: Got endpoints: latency-svc-g888t [1.852581865s]
Feb 11 12:18:28.686: INFO: Created: latency-svc-b754w
Feb 11 12:18:28.703: INFO: Got endpoints: latency-svc-b754w [1.980006628s]
Feb 11 12:18:28.753: INFO: Created: latency-svc-vvnzc
Feb 11 12:18:28.808: INFO: Got endpoints: latency-svc-vvnzc [1.923388957s]
Feb 11 12:18:28.848: INFO: Created: latency-svc-lfc5r
Feb 11 12:18:28.858: INFO: Got endpoints: latency-svc-lfc5r [1.973920005s]
Feb 11 12:18:29.041: INFO: Created: latency-svc-l4drh
Feb 11 12:18:29.061: INFO: Created: latency-svc-5v6g5
Feb 11 12:18:29.061: INFO: Got endpoints: latency-svc-l4drh [2.026264006s]
Feb 11 12:18:29.070: INFO: Got endpoints: latency-svc-5v6g5 [2.006731809s]
Feb 11 12:18:29.125: INFO: Created: latency-svc-l968f
Feb 11 12:18:29.564: INFO: Got endpoints: latency-svc-l968f [2.304937169s]
Feb 11 12:18:29.604: INFO: Created: latency-svc-xzptn
Feb 11 12:18:29.616: INFO: Got endpoints: latency-svc-xzptn [2.362299148s]
Feb 11 12:18:29.810: INFO: Created: latency-svc-rxbtf
Feb 11 12:18:29.819: INFO: Got endpoints: latency-svc-rxbtf [2.258280572s]
Feb 11 12:18:29.881: INFO: Created: latency-svc-8b5pp
Feb 11 12:18:30.022: INFO: Got endpoints: latency-svc-8b5pp [2.397539997s]
Feb 11 12:18:30.063: INFO: Created: latency-svc-zmkrp
Feb 11 12:18:30.087: INFO: Got endpoints: latency-svc-zmkrp [2.274206126s]
Feb 11 12:18:30.338: INFO: Created: latency-svc-pxr67
Feb 11 12:18:30.618: INFO: Got endpoints: latency-svc-pxr67 [2.638482705s]
Feb 11 12:18:30.632: INFO: Created: latency-svc-dhtq9
Feb 11 12:18:30.919: INFO: Got endpoints: latency-svc-dhtq9 [2.764806404s]
Feb 11 12:18:31.133: INFO: Created: latency-svc-bjtdv
Feb 11 12:18:31.198: INFO: Got endpoints: latency-svc-bjtdv [2.856528581s]
Feb 11 12:18:31.348: INFO: Created: latency-svc-wzkzj
Feb 11 12:18:31.359: INFO: Got endpoints: latency-svc-wzkzj [2.904229826s]
Feb 11 12:18:31.536: INFO: Created: latency-svc-9nhnv
Feb 11 12:18:31.560: INFO: Got endpoints: latency-svc-9nhnv [3.035978559s]
Feb 11 12:18:31.639: INFO: Created: latency-svc-gpqqs
Feb 11 12:18:31.667: INFO: Got endpoints: latency-svc-gpqqs [2.964381875s]
Feb 11 12:18:31.688: INFO: Created: latency-svc-64xwk
Feb 11 12:18:31.697: INFO: Got endpoints: latency-svc-64xwk [2.888824147s]
Feb 11 12:18:31.799: INFO: Created: latency-svc-4c29p
Feb 11 12:18:31.829: INFO: Got endpoints: latency-svc-4c29p [2.970232824s]
Feb 11 12:18:31.859: INFO: Created: latency-svc-td29j
Feb 11 12:18:31.878: INFO: Got endpoints: latency-svc-td29j [2.815992917s]
Feb 11 12:18:31.976: INFO: Created: latency-svc-hd7dd
Feb 11 12:18:31.996: INFO: Got endpoints: latency-svc-hd7dd [2.92482819s]
Feb 11 12:18:31.996: INFO: Latencies: [215.551686ms 329.587783ms 364.868916ms 407.029817ms 561.940238ms 615.347508ms 1.241645806s 1.517658369s 1.529767789s 1.640480864s 1.650679131s 1.664821774s 1.666402918s 1.685218525s 1.705686428s 1.72208839s 1.775143485s 1.784086758s 1.838008969s 1.852581865s 1.854515446s 1.886003884s 1.923388957s 1.94662273s 1.955736599s 1.962910582s 1.973920005s 1.980006628s 1.995277384s 1.995430018s 1.996634163s 2.006731809s 2.024565787s 2.026264006s 2.036113667s 2.039159587s 2.053619736s 2.06172899s 2.062802964s 2.068413191s 2.072806024s 2.079887032s 2.083149022s 2.086701252s 2.095597554s 2.095793601s 2.109337215s 2.127947684s 2.133913958s 2.134755222s 2.144534912s 2.15357521s 2.175196324s 2.187358742s 2.190979954s 2.193262834s 2.193418707s 2.195229768s 2.196570679s 2.212368142s 2.224930162s 2.225776346s 2.23533844s 2.24030559s 2.2410948s 2.251867061s 2.252691408s 2.258280572s 2.260049822s 2.26116324s 2.274206126s 2.281859277s 2.295007288s 2.304937169s 2.311264081s 2.313864243s 2.321030456s 2.325415682s 2.330046648s 2.337280855s 2.345142579s 2.355103611s 2.35515704s 2.360098567s 2.362299148s 2.376281539s 2.389038115s 2.397539997s 2.400562263s 2.402857029s 2.413693713s 2.414353262s 2.426829224s 2.427185289s 2.432465768s 2.461257657s 2.46718457s 2.468530929s 2.48411359s 2.488731782s 2.4901918s 2.494832668s 2.498165797s 2.507830056s 2.510139817s 2.521430131s 2.526815591s 2.53246942s 2.536283298s 2.53720116s 2.550449255s 2.551223818s 2.552477861s 2.565775948s 2.627687631s 2.631903805s 2.638482705s 2.676137821s 2.689302463s 2.708408053s 2.711030194s 2.721019691s 2.725237954s 2.74411261s 2.745328345s 2.764806404s 2.768887004s 2.776044479s 2.788235182s 2.804562143s 2.814481457s 2.814505373s 2.815992917s 2.829824316s 2.842069017s 2.856528581s 2.86998734s 2.871435412s 2.888824147s 2.904229826s 2.92482819s 2.928050136s 2.936052474s 2.953975635s 2.964381875s 2.970232824s 2.972064411s 2.973141637s 2.979840745s 3.002972482s 3.007105783s 3.031175695s 3.03286753s 3.033059693s 3.035978559s 3.040996262s 3.050938331s 3.06136477s 3.063596983s 3.092603777s 3.098749869s 3.117613769s 3.127981089s 3.156302819s 3.199778025s 3.232695465s 3.266096224s 3.271180335s 3.284048276s 3.290244894s 3.302079657s 3.322770612s 3.34285008s 3.353644906s 3.369310262s 3.378582986s 3.391060902s 3.395710053s 3.396124142s 3.412329744s 3.438934954s 3.483958347s 3.509552139s 3.545462735s 3.667285975s 3.706115292s 3.800528167s 3.810812677s 3.829893227s 3.842755688s 3.848169091s 3.862338632s 3.871062664s 3.882982427s 3.965776579s 4.238063777s 4.378006338s 4.461526974s 4.499169002s 4.510849616s]
Feb 11 12:18:31.996: INFO: 50 %ile: 2.4901918s
Feb 11 12:18:31.996: INFO: 90 %ile: 3.438934954s
Feb 11 12:18:31.996: INFO: 99 %ile: 4.499169002s
Feb 11 12:18:31.996: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:18:31.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-pm7rv" for this suite.
Feb 11 12:19:28.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:19:28.116: INFO: namespace: e2e-tests-svc-latency-pm7rv, resource: bindings, ignored listing per whitelist
Feb 11 12:19:28.173: INFO: namespace e2e-tests-svc-latency-pm7rv deletion completed in 56.165563099s

• [SLOW TEST:104.418 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
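
The latency test above stands up one backing replication controller, then creates 200 throwaway services against it and times how long each takes to show up in its Endpoints object, which is where the 50/90/99 %ile summary comes from; the suite compares those percentiles against its pass thresholds in-process. A single-sample version of that measurement can be improvised in shell (GNU date, assumed names and image; the real suite uses API watches rather than polling):

kubectl run latency-backend --image=nginx --restart=Never --labels=app=latency-demo --port=80   # assumed image
kubectl wait --for=condition=Ready pod/latency-backend --timeout=120s
start=$(date +%s%N)
kubectl expose pod latency-backend --name=latency-svc-demo --port=80
until kubectl get endpoints latency-svc-demo -o jsonpath='{.subsets[0].addresses[0].ip}' >/dev/null 2>&1; do
  sleep 0.2
done
echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"
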
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:19:28.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:19:28.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:19:38.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pg9n4" for this suite.
Feb 11 12:20:24.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:20:24.627: INFO: namespace: e2e-tests-pods-pg9n4, resource: bindings, ignored listing per whitelist
Feb 11 12:20:24.701: INFO: namespace e2e-tests-pods-pg9n4 deletion completed in 46.251349778s

• [SLOW TEST:56.528 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
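
As with the exec case earlier, this spec reads container output straight from the API server's log subresource over a websocket; only pod creation and teardown show up in the log because the read happens in-process. The CLI equivalent, with assumed names and image:

kubectl run ws-logs-demo --image=busybox --restart=Never -- sh -c 'for i in 1 2 3; do echo line-$i; done; sleep 3600'
kubectl wait --for=condition=Ready pod/ws-logs-demo --timeout=120s
kubectl logs ws-logs-demo              # one-shot read of the same log stream the test pulls over the websocket
kubectl delete pod ws-logs-demo --now
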
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:20:24.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:20:25.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-mcsfh" to be "success or failure"
Feb 11 12:20:25.094: INFO: Pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.008346ms
Feb 11 12:20:27.107: INFO: Pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029945295s
Feb 11 12:20:29.124: INFO: Pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046718028s
Feb 11 12:20:31.170: INFO: Pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092617111s
Feb 11 12:20:33.184: INFO: Pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106403978s
Feb 11 12:20:35.201: INFO: Pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1239234s
STEP: Saw pod success
Feb 11 12:20:35.201: INFO: Pod "downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:20:35.213: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:20:36.051: INFO: Waiting for pod downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:20:36.074: INFO: Pod downwardapi-volume-e191521a-4cc8-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:20:36.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mcsfh" for this suite.
Feb 11 12:20:42.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:20:42.265: INFO: namespace: e2e-tests-downward-api-mcsfh, resource: bindings, ignored listing per whitelist
Feb 11 12:20:42.371: INFO: namespace e2e-tests-downward-api-mcsfh deletion completed in 6.278563045s

• [SLOW TEST:17.669 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
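
The "podname only" spec is the simplest downwardAPI volume shape: a single item sourced from metadata.name, with the test asserting the file's content rather than its mode. Hedged minimal manifest (assumed names and image):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-podname-demo    # expect the pod's own name, downwardapi-podname-demo
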
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:20:42.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:20:42.819: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb 11 12:20:42.829: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hvhtz/daemonsets","resourceVersion":"21309939"},"items":null}

Feb 11 12:20:42.832: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hvhtz/pods","resourceVersion":"21309939"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:20:42.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hvhtz" for this suite.
Feb 11 12:20:48.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:20:49.014: INFO: namespace: e2e-tests-daemonsets-hvhtz, resource: bindings, ignored listing per whitelist
Feb 11 12:20:49.106: INFO: namespace e2e-tests-daemonsets-hvhtz deletion completed in 6.254075603s

S [SKIPPING] [6.734 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb 11 12:20:42.819: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
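
The SKIPPING verdict above is a precondition, not a failure: the rollback spec wants at least two schedulable nodes, and the "-1" in the message appears to be the framework's unset node-count default rather than a real count. The cluster state it reacted to can be confirmed directly:

kubectl get nodes -o wide                 # a single Ready node here, hence the skip
kubectl get nodes --no-headers | wc -l    # the rollback spec wants this to be >= 2
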
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:20:49.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 11 12:20:49.326: INFO: namespace e2e-tests-kubectl-xj6t5
Feb 11 12:20:49.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xj6t5'
Feb 11 12:20:52.927: INFO: stderr: ""
Feb 11 12:20:52.928: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 11 12:20:53.986: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:20:53.986: INFO: Found 0 / 1
Feb 11 12:20:54.993: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:20:54.993: INFO: Found 0 / 1
Feb 11 12:20:56.332: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:20:56.333: INFO: Found 0 / 1
Feb 11 12:20:57.039: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:20:57.039: INFO: Found 0 / 1
Feb 11 12:20:57.952: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:20:57.952: INFO: Found 0 / 1
Feb 11 12:20:58.960: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:20:58.961: INFO: Found 0 / 1
Feb 11 12:21:00.391: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:21:00.391: INFO: Found 0 / 1
Feb 11 12:21:01.493: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:21:01.493: INFO: Found 0 / 1
Feb 11 12:21:02.024: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:21:02.024: INFO: Found 0 / 1
Feb 11 12:21:02.957: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:21:02.957: INFO: Found 0 / 1
Feb 11 12:21:03.973: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:21:03.974: INFO: Found 0 / 1
Feb 11 12:21:04.964: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:21:04.964: INFO: Found 1 / 1
Feb 11 12:21:04.965: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 11 12:21:04.975: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 12:21:04.975: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 11 12:21:04.975: INFO: waiting for redis-master startup in e2e-tests-kubectl-xj6t5
Feb 11 12:21:04.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-r4vnl redis-master --namespace=e2e-tests-kubectl-xj6t5'
Feb 11 12:21:05.275: INFO: stderr: ""
Feb 11 12:21:05.275: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Feb 12:21:02.770 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Feb 12:21:02.770 # Server started, Redis version 3.2.12\n1:M 11 Feb 12:21:02.770 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Feb 12:21:02.770 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 11 12:21:05.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-xj6t5'
Feb 11 12:21:05.524: INFO: stderr: ""
Feb 11 12:21:05.524: INFO: stdout: "service/rm2 exposed\n"
Feb 11 12:21:05.533: INFO: Service rm2 in namespace e2e-tests-kubectl-xj6t5 found.
STEP: exposing service
Feb 11 12:21:07.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-xj6t5'
Feb 11 12:21:07.950: INFO: stderr: ""
Feb 11 12:21:07.950: INFO: stdout: "service/rm3 exposed\n"
Feb 11 12:21:07.964: INFO: Service rm3 in namespace e2e-tests-kubectl-xj6t5 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:21:09.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xj6t5" for this suite.
Feb 11 12:21:34.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:21:34.250: INFO: namespace: e2e-tests-kubectl-xj6t5, resource: bindings, ignored listing per whitelist
Feb 11 12:21:34.271: INFO: namespace e2e-tests-kubectl-xj6t5 deletion completed in 24.267535786s

• [SLOW TEST:45.164 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
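The two `kubectl expose` invocations logged above work the same way outside the suite. A minimal sketch against an existing replication controller; `redis-master`, `rm2`, `rm3` and the ports match the log, everything else is left to the cluster's defaults.

# Expose the RC as a service, then expose that service again under a second name.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379

# Both services should select the same redis pods.
kubectl get svc rm2 rm3 -o wide
kubectl get endpoints rm2 rm3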
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:21:34.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb 11 12:21:34.570: INFO: Waiting up to 5m0s for pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005" in namespace "e2e-tests-containers-dwnfw" to be "success or failure"
Feb 11 12:21:34.580: INFO: Pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093849ms
Feb 11 12:21:36.660: INFO: Pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089906321s
Feb 11 12:21:38.713: INFO: Pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142508648s
Feb 11 12:21:40.728: INFO: Pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157716571s
Feb 11 12:21:42.749: INFO: Pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178954887s
Feb 11 12:21:44.762: INFO: Pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.191572565s
STEP: Saw pod success
Feb 11 12:21:44.762: INFO: Pod "client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:21:44.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 12:21:44.983: INFO: Waiting for pod client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:21:45.016: INFO: Pod client-containers-0b0448c9-4cc9-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:21:45.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dwnfw" for this suite.
Feb 11 12:21:52.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:21:52.356: INFO: namespace: e2e-tests-containers-dwnfw, resource: bindings, ignored listing per whitelist
Feb 11 12:21:52.577: INFO: namespace e2e-tests-containers-dwnfw deletion completed in 7.517485145s

• [SLOW TEST:18.305 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
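The Docker Containers spec verifies that a container with neither `command` nor `args` runs the image's own ENTRYPOINT/CMD. A hand-run sketch; the pod name and image are assumptions.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # no command/args: the image defaults are used
EOF

# The API object carries no command, so the runtime falls back to the image's CMD.
kubectl get pod image-defaults -o jsonpath='{.spec.containers[0].command}'
kubectl logs image-defaults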
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:21:52.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:22:18.937: INFO: Container started at 2020-02-11 12:22:01 +0000 UTC, pod became ready at 2020-02-11 12:22:17 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:22:18.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-k67dv" for this suite.
Feb 11 12:22:45.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:22:45.379: INFO: namespace: e2e-tests-container-probe-k67dv, resource: bindings, ignored listing per whitelist
Feb 11 12:22:45.381: INFO: namespace e2e-tests-container-probe-k67dv deletion completed in 26.434501546s

• [SLOW TEST:52.803 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
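The probe spec only logs the timing result (container started at 12:22:01, Ready at 12:22:17), not the pod it used. A sketch of a pod whose readiness probe enforces a similar initial delay; every name, image and number below is an assumption.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay
spec:
  containers:
  - name: app
    image: nginx:1.17
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15   # the pod must not report Ready before this
      periodSeconds: 5
EOF

# Watch the Ready condition flip only after the initial delay has elapsed;
# a readiness probe never restarts the container.
kubectl get pod readiness-delay -w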
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:22:45.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:22:45.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-lbvrf" to be "success or failure"
Feb 11 12:22:45.661: INFO: Pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.695723ms
Feb 11 12:22:47.711: INFO: Pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085183745s
Feb 11 12:22:49.724: INFO: Pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098291696s
Feb 11 12:22:51.810: INFO: Pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184422806s
Feb 11 12:22:53.836: INFO: Pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209673377s
Feb 11 12:22:55.973: INFO: Pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.347449698s
STEP: Saw pod success
Feb 11 12:22:55.973: INFO: Pod "downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:22:55.984: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:22:56.170: INFO: Waiting for pod downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:22:56.219: INFO: Pod downwardapi-volume-3557593e-4cc9-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:22:56.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lbvrf" for this suite.
Feb 11 12:23:02.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:23:02.657: INFO: namespace: e2e-tests-downward-api-lbvrf, resource: bindings, ignored listing per whitelist
Feb 11 12:23:02.789: INFO: namespace e2e-tests-downward-api-lbvrf deletion completed in 6.542670014s

• [SLOW TEST:17.407 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
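The Downward API spec mounts the container's memory limit as a file, but the manifest itself is not echoed into the log. An equivalent sketch, assuming illustrative names and a 64Mi limit.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi           # the mounted file should read "64"
EOF

kubectl logs downwardapi-mem-limit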
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:23:02.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:23:03.018: INFO: Creating ReplicaSet my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005
Feb 11 12:23:03.054: INFO: Pod name my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005: Found 0 pods out of 1
Feb 11 12:23:08.368: INFO: Pod name my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005: Found 1 pods out of 1
Feb 11 12:23:08.368: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005" is running
Feb 11 12:23:12.930: INFO: Pod "my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005-9cbx8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 12:23:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 12:23:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 12:23:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 12:23:03 +0000 UTC Reason: Message:}])
Feb 11 12:23:12.930: INFO: Trying to dial the pod
Feb 11 12:23:17.984: INFO: Controller my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005: Got expected result from replica 1 [my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005-9cbx8]: "my-hostname-basic-3fc44f38-4cc9-11ea-a6e3-0242ac110005-9cbx8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:23:17.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-n4jz9" for this suite.
Feb 11 12:23:24.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:23:24.170: INFO: namespace: e2e-tests-replicaset-n4jz9, resource: bindings, ignored listing per whitelist
Feb 11 12:23:24.272: INFO: namespace e2e-tests-replicaset-n4jz9 deletion completed in 6.279187763s

• [SLOW TEST:21.483 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
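The ReplicaSet spec starts one replica of a hostname-serving image and then dials it. A sketch of an equivalent manifest plus a manual dial; the names and the container port are assumptions (the serve-hostname image is the one this suite uses elsewhere in the log).

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376    # assumed serve port
EOF

# Dial the replica; it should answer with its own pod name.
POD=$(kubectl get pods -l name=my-hostname-basic -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "$POD" 8080:9376 &
sleep 2 && curl -s http://127.0.0.1:8080/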
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:23:24.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-4c864892-4cc9-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:23:24.449: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-nkmkl" to be "success or failure"
Feb 11 12:23:24.472: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.241244ms
Feb 11 12:23:26.521: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071615866s
Feb 11 12:23:28.642: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192474277s
Feb 11 12:23:30.657: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207666185s
Feb 11 12:23:32.707: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257989831s
Feb 11 12:23:34.721: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.271861484s
Feb 11 12:23:36.736: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.286459385s
STEP: Saw pod success
Feb 11 12:23:36.736: INFO: Pod "pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:23:36.740: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 11 12:23:37.250: INFO: Waiting for pod pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:23:37.605: INFO: Pod pod-projected-secrets-4c870f13-4cc9-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:23:37.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nkmkl" for this suite.
Feb 11 12:23:43.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:23:43.951: INFO: namespace: e2e-tests-projected-nkmkl, resource: bindings, ignored listing per whitelist
Feb 11 12:23:43.975: INFO: namespace e2e-tests-projected-nkmkl deletion completed in 6.344489951s

• [SLOW TEST:19.702 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
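This projected-secret spec mounts a single secret into one pod through two separate projected volumes. A sketch with assumed names; the secret key becomes a file under each mount path.

kubectl create secret generic multi-secret --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: multi-secret
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: multi-secret
EOF

kubectl logs pod-projected-secrets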
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:23:43.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xhmk6
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 12:23:44.467: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 11 12:24:26.810: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xhmk6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 12:24:26.812: INFO: >>> kubeConfig: /root/.kube/config
I0211 12:24:26.871610       9 log.go:172] (0xc0003dba20) (0xc001f105a0) Create stream
I0211 12:24:26.871751       9 log.go:172] (0xc0003dba20) (0xc001f105a0) Stream added, broadcasting: 1
I0211 12:24:26.876588       9 log.go:172] (0xc0003dba20) Reply frame received for 1
I0211 12:24:26.876723       9 log.go:172] (0xc0003dba20) (0xc001fa8000) Create stream
I0211 12:24:26.876737       9 log.go:172] (0xc0003dba20) (0xc001fa8000) Stream added, broadcasting: 3
I0211 12:24:26.877846       9 log.go:172] (0xc0003dba20) Reply frame received for 3
I0211 12:24:26.877870       9 log.go:172] (0xc0003dba20) (0xc001c960a0) Create stream
I0211 12:24:26.877880       9 log.go:172] (0xc0003dba20) (0xc001c960a0) Stream added, broadcasting: 5
I0211 12:24:26.878823       9 log.go:172] (0xc0003dba20) Reply frame received for 5
I0211 12:24:27.071677       9 log.go:172] (0xc0003dba20) Data frame received for 3
I0211 12:24:27.071850       9 log.go:172] (0xc001fa8000) (3) Data frame handling
I0211 12:24:27.071874       9 log.go:172] (0xc001fa8000) (3) Data frame sent
I0211 12:24:27.210898       9 log.go:172] (0xc0003dba20) Data frame received for 1
I0211 12:24:27.211079       9 log.go:172] (0xc0003dba20) (0xc001fa8000) Stream removed, broadcasting: 3
I0211 12:24:27.211322       9 log.go:172] (0xc0003dba20) (0xc001c960a0) Stream removed, broadcasting: 5
I0211 12:24:27.211566       9 log.go:172] (0xc001f105a0) (1) Data frame handling
I0211 12:24:27.211663       9 log.go:172] (0xc001f105a0) (1) Data frame sent
I0211 12:24:27.211682       9 log.go:172] (0xc0003dba20) (0xc001f105a0) Stream removed, broadcasting: 1
I0211 12:24:27.211706       9 log.go:172] (0xc0003dba20) Go away received
I0211 12:24:27.212064       9 log.go:172] (0xc0003dba20) (0xc001f105a0) Stream removed, broadcasting: 1
I0211 12:24:27.212087       9 log.go:172] (0xc0003dba20) (0xc001fa8000) Stream removed, broadcasting: 3
I0211 12:24:27.212100       9 log.go:172] (0xc0003dba20) (0xc001c960a0) Stream removed, broadcasting: 5
Feb 11 12:24:27.212: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:24:27.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xhmk6" for this suite.
Feb 11 12:24:48.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:24:49.034: INFO: namespace: e2e-tests-pod-network-test-xhmk6, resource: bindings, ignored listing per whitelist
Feb 11 12:24:49.085: INFO: namespace e2e-tests-pod-network-test-xhmk6 deletion completed in 21.858028834s

• [SLOW TEST:65.110 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
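The intra-pod check above is a `curl` run inside the host-test container against the test webserver's /dial endpoint; the ExecWithOptions line shows the exact URL. The same probe can be issued by hand with `kubectl exec` (pod, container and namespace names are the ones from this run; the pod IPs would differ on another cluster).

kubectl exec host-test-container-pod -c hostexec \
  --namespace=e2e-tests-pod-network-test-xhmk6 -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
# A JSON body listing the target's hostname indicates pod-to-pod HTTP works.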
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:24:49.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 11 12:24:49.312: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:25:12.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-pdf2m" for this suite.
Feb 11 12:25:37.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:25:37.159: INFO: namespace: e2e-tests-init-container-pdf2m, resource: bindings, ignored listing per whitelist
Feb 11 12:25:37.268: INFO: namespace e2e-tests-init-container-pdf2m deletion completed in 24.236223326s

• [SLOW TEST:48.182 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
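The init-container spec only logs "PodSpec: initContainers in spec.initContainers", so the pod shape is not visible. A sketch of a RestartAlways pod whose init containers must both complete before the main container starts; names and images are assumptions.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox:1.29
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

# Init containers are reported separately from the main container.
kubectl get pod pod-init-demo -o jsonpath='{.status.initContainerStatuses[*].name}'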
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:25:37.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-9bcd4f8f-4cc9-11ea-a6e3-0242ac110005
STEP: Creating secret with name s-test-opt-upd-9bcd50cc-4cc9-11ea-a6e3-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9bcd4f8f-4cc9-11ea-a6e3-0242ac110005
STEP: Updating secret s-test-opt-upd-9bcd50cc-4cc9-11ea-a6e3-0242ac110005
STEP: Creating secret with name s-test-opt-create-9bcd50f6-4cc9-11ea-a6e3-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:27:08.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-82m8r" for this suite.
Feb 11 12:27:32.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:27:32.552: INFO: namespace: e2e-tests-projected-82m8r, resource: bindings, ignored listing per whitelist
Feb 11 12:27:32.669: INFO: namespace e2e-tests-projected-82m8r deletion completed in 24.206175057s

• [SLOW TEST:115.400 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
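The "optional updates" spec deletes one secret, rewrites another and creates a third while the pod is running, then waits for the projected volume to follow; the kubelet re-syncs secret volumes periodically. A sketch, all names assumed; `optional: true` is what allows mounting a secret that does not exist yet.

kubectl create secret generic s-del --from-literal=data-1=old
kubectl create secret generic s-upd --from-literal=data-1=old

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-updates
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /etc/projected
  volumes:
  - name: vol
    projected:
      sources:
      - secret:
          name: s-del
          optional: true
          items:
          - key: data-1
            path: delete/data-1
      - secret:
          name: s-upd
          optional: true
          items:
          - key: data-1
            path: update/data-1
      - secret:
          name: s-new
          optional: true          # may be created after the pod
          items:
          - key: data-1
            path: create/data-1
EOF

# Mutate the sources, then watch the mounted files catch up after the kubelet sync.
kubectl delete secret s-del
kubectl delete secret s-upd && kubectl create secret generic s-upd --from-literal=data-1=new
kubectl create secret generic s-new --from-literal=data-1=created
kubectl exec optional-updates -- ls -R /etc/projected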
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:27:32.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e0a324d7-4cc9-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:27:32.947: INFO: Waiting up to 5m0s for pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-zr42v" to be "success or failure"
Feb 11 12:27:32.965: INFO: Pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.656187ms
Feb 11 12:27:35.256: INFO: Pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308587397s
Feb 11 12:27:37.281: INFO: Pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333549453s
Feb 11 12:27:39.385: INFO: Pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437900459s
Feb 11 12:27:41.402: INFO: Pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454633171s
Feb 11 12:27:43.423: INFO: Pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.475232793s
STEP: Saw pod success
Feb 11 12:27:43.423: INFO: Pod "pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:27:43.437: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 11 12:27:43.854: INFO: Waiting for pod pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:27:43.940: INFO: Pod pod-secrets-e0a48041-4cc9-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:27:43.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zr42v" for this suite.
Feb 11 12:27:49.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:27:50.078: INFO: namespace: e2e-tests-secrets-zr42v, resource: bindings, ignored listing per whitelist
Feb 11 12:27:50.145: INFO: namespace e2e-tests-secrets-zr42v deletion completed in 6.192307124s

• [SLOW TEST:17.475 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
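This spec is the plain-secret twin of the projected case a few entries up: the same secret mounted through two ordinary `secret` volumes. Only the volume stanza differs, so the sketch is kept short; all names are assumptions.

kubectl create secret generic secret-test --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test
EOF

kubectl logs pod-secrets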
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:27:50.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 11 12:28:01.279: INFO: Successfully updated pod "pod-update-eb1257e3-4cc9-11ea-a6e3-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Feb 11 12:28:01.389: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:28:01.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jvxnr" for this suite.
Feb 11 12:28:25.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:28:25.689: INFO: namespace: e2e-tests-pods-jvxnr, resource: bindings, ignored listing per whitelist
Feb 11 12:28:25.701: INFO: namespace e2e-tests-pods-jvxnr deletion completed in 24.295285652s

• [SLOW TEST:35.556 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
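The pod-update spec creates a pod, mutates it in place and reads the change back; the log only records "Successfully updated pod" and "Pod update OK". A hand-run sketch using labels and annotations, which are among the few pod fields that stay mutable after scheduling; the pod name, image and keys are assumptions.

kubectl run pod-update --image=nginx:1.17 --restart=Never

kubectl label pod pod-update run-phase=updated
kubectl annotate pod pod-update updated=true
kubectl get pod pod-update --show-labels
kubectl get pod pod-update -o jsonpath='{.metadata.annotations.updated}'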
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:28:25.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-scwz8
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-scwz8
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-scwz8
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-scwz8
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-scwz8
Feb 11 12:28:40.141: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-scwz8, name: ss-0, uid: 0527e74a-4cca-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 11 12:28:42.496: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-scwz8, name: ss-0, uid: 0527e74a-4cca-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 11 12:28:42.634: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-scwz8, name: ss-0, uid: 0527e74a-4cca-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 11 12:28:42.651: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-scwz8
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-scwz8
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-scwz8 and is in the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 11 12:28:56.973: INFO: Deleting all statefulset in ns e2e-tests-statefulset-scwz8
Feb 11 12:28:56.981: INFO: Scaling statefulset ss to 0
Feb 11 12:29:07.030: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 12:29:07.037: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:29:07.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-scwz8" for this suite.
Feb 11 12:29:13.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:29:13.265: INFO: namespace: e2e-tests-statefulset-scwz8, resource: bindings, ignored listing per whitelist
Feb 11 12:29:13.335: INFO: namespace e2e-tests-statefulset-scwz8 deletion completed in 6.24583881s

• [SLOW TEST:47.633 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
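The StatefulSet spec provokes a port conflict so ss-0 fails, checks that the controller recreates it, and then (in AfterEach) scales the set to zero and deletes it. The teardown half maps directly onto kubectl; `ss` and `ss-0` match the log, the watch is just for observation.

# Watch the controller replace a failed replica under the same identity.
kubectl get pods -w &
kubectl delete pod ss-0          # a new ss-0 should be created by the controller

# Teardown, mirroring the suite's AfterEach: scale to zero, confirm, delete.
kubectl scale statefulset ss --replicas=0
kubectl get statefulset ss -o jsonpath='{.status.replicas}'
kubectl delete statefulset ss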
S
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:29:13.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:29:13.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-k49d2" for this suite.
Feb 11 12:29:19.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:29:19.873: INFO: namespace: e2e-tests-services-k49d2, resource: bindings, ignored listing per whitelist
Feb 11 12:29:19.960: INFO: namespace e2e-tests-services-k49d2 deletion completed in 6.421420567s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.625 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
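The "secure master service" spec shows no intermediate steps because everything it checks lives on the built-in `kubernetes` service in the `default` namespace. The equivalent manual inspection:

kubectl get service kubernetes -n default -o wide
kubectl get service kubernetes -n default \
  -o jsonpath='{range .spec.ports[*]}{.name}{" "}{.port}{" "}{.targetPort}{"\n"}{end}'
# Expect an "https" port on 443 pointing at the apiserver's secure port.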
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:29:19.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 11 12:32:23.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:23.500: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:25.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:25.528: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:27.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:27.516: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:29.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:29.538: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:31.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:31.521: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:33.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:33.516: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:35.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:35.514: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:37.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:37.524: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:39.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:39.518: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:41.501: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:41.516: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:43.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:43.550: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:45.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:45.589: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:47.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:47.584: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:49.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:49.514: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:51.501: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:51.553: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:53.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:53.524: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:55.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:55.516: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:57.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:57.518: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:32:59.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:32:59.515: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:01.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:01.523: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:03.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:03.520: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:05.501: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:05.528: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:07.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:07.518: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:09.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:09.519: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:11.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:11.561: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:13.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:13.521: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:15.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:15.517: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:17.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:17.515: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:19.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:19.521: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:21.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:21.530: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:23.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:23.521: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:25.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:25.541: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:27.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:27.514: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:29.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:29.523: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:31.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:31.520: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:33.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:33.517: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:35.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:35.553: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:37.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:37.514: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:39.501: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:39.551: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:41.501: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:42.724: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:43.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:43.510: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:45.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:45.550: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:47.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:47.514: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:49.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:49.542: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:51.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:51.525: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:53.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:53.518: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:55.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:55.546: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:57.501: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:57.521: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:33:59.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:33:59.521: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:01.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:01.527: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:03.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:03.519: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:05.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:05.521: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:07.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:07.515: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:09.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:09.526: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:11.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:11.528: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:13.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:13.575: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:15.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:15.522: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:17.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:17.545: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:19.501: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:19.534: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:21.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:21.533: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 12:34:23.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 12:34:23.550: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:34:23.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-2657f" for this suite.
Feb 11 12:34:47.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:34:47.654: INFO: namespace: e2e-tests-container-lifecycle-hook-2657f, resource: bindings, ignored listing per whitelist
Feb 11 12:34:47.804: INFO: namespace e2e-tests-container-lifecycle-hook-2657f deletion completed in 24.239662698s

• [SLOW TEST:327.843 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
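The lifecycle-hook spec creates a handler pod, then a pod with a postStart exec hook, checks the hook ran, and finally polls until the deleted pod disappears (the long "still exists" run above). A sketch of the pod-with-hook side; the image and the hook command are assumptions, the pod name matches the log.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart ran > /tmp/poststart"]
EOF

# The hook runs inside the same container immediately after it starts.
kubectl exec pod-with-poststart-exec-hook -- cat /tmp/poststart

# Delete and wait for the pod to disappear, as the test does by polling.
kubectl delete pod pod-with-poststart-exec-hook
kubectl wait --for=delete pod/pod-with-poststart-exec-hook --timeout=5m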
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:34:47.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 11 12:34:58.232: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e4050a2a-4cca-11ea-a6e3-0242ac110005,GenerateName:,Namespace:e2e-tests-events-ff8k8,SelfLink:/api/v1/namespaces/e2e-tests-events-ff8k8/pods/send-events-e4050a2a-4cca-11ea-a6e3-0242ac110005,UID:e405e3b7-4cca-11ea-a994-fa163e34d433,ResourceVersion:21311510,Generation:0,CreationTimestamp:2020-02-11 12:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 86253789,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-f2sm2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f2sm2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-f2sm2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00159b400} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00159b5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:34:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:34:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:34:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:34:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-11 12:34:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-11 12:34:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://cad3bb2b039ec313b3decac25eba830cabf3712f301baddbb091839fcc24e516}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 11 12:35:00.250: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 11 12:35:02.268: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:35:02.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-ff8k8" for this suite.
Feb 11 12:35:42.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:35:42.700: INFO: namespace: e2e-tests-events-ff8k8, resource: bindings, ignored listing per whitelist
Feb 11 12:35:42.714: INFO: namespace e2e-tests-events-ff8k8 deletion completed in 40.381154212s

• [SLOW TEST:54.909 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
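
The spec above only asserts that two kinds of events exist for the freshly created pod: one attributed to the scheduler and one to the kubelet. A rough way to repeat that check by hand with kubectl, assuming a pod named send-events-example in namespace default (both placeholders, not taken from this run) and assuming the cluster accepts these event field selectors:

# events emitted by the scheduler for the pod
kubectl -n default get events \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-example,source=default-scheduler
# events emitted by the kubelet for the pod
kubectl -n default get events \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-example,source=kubelet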
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:35:42.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:35:42.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-bd6xk" to be "success or failure"
Feb 11 12:35:43.037: INFO: Pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 115.461199ms
Feb 11 12:35:45.233: INFO: Pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311366131s
Feb 11 12:35:47.249: INFO: Pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327200827s
Feb 11 12:35:49.261: INFO: Pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338701099s
Feb 11 12:35:51.321: INFO: Pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398890213s
Feb 11 12:35:53.345: INFO: Pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.423288836s
STEP: Saw pod success
Feb 11 12:35:53.345: INFO: Pod "downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:35:53.350: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:35:54.128: INFO: Waiting for pod downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:35:54.153: INFO: Pod downwardapi-volume-04b22741-4ccb-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:35:54.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bd6xk" for this suite.
Feb 11 12:36:00.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:36:00.368: INFO: namespace: e2e-tests-projected-bd6xk, resource: bindings, ignored listing per whitelist
Feb 11 12:36:00.730: INFO: namespace e2e-tests-projected-bd6xk deletion completed in 6.56521761s

• [SLOW TEST:18.017 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
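
The pod used by this spec mounts the downward API through a projected volume and, because the container declares no memory limit, expects the exposed file to fall back to the node's allocatable memory. A minimal sketch of such a pod, with hypothetical names (downwardapi-example, podinfo) and busybox standing in for the test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF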
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:36:00.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:36:00.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-wfd5q" to be "success or failure"
Feb 11 12:36:01.057: INFO: Pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.168243ms
Feb 11 12:36:03.336: INFO: Pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342318415s
Feb 11 12:36:05.355: INFO: Pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362016615s
Feb 11 12:36:07.376: INFO: Pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382969071s
Feb 11 12:36:09.390: INFO: Pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396496214s
Feb 11 12:36:11.891: INFO: Pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.89811855s
STEP: Saw pod success
Feb 11 12:36:11.892: INFO: Pod "downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:36:11.900: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:36:12.240: INFO: Waiting for pod downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:36:12.253: INFO: Pod downwardapi-volume-0f76ba4a-4ccb-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:36:12.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wfd5q" for this suite.
Feb 11 12:36:20.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:36:20.434: INFO: namespace: e2e-tests-downward-api-wfd5q, resource: bindings, ignored listing per whitelist
Feb 11 12:36:20.453: INFO: namespace e2e-tests-downward-api-wfd5q deletion completed in 8.192384435s

• [SLOW TEST:19.722 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
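
This spec is the non-projected variant: a plain downwardAPI volume exposing the container's CPU request as a file. A minimal sketch with hypothetical names; the divisor of 1m makes a 250m request show up in the file as the integer 250:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF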
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:36:20.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 11 12:36:20.819: INFO: Waiting up to 5m0s for pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-phrdq" to be "success or failure"
Feb 11 12:36:20.842: INFO: Pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.34774ms
Feb 11 12:36:22.862: INFO: Pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041791865s
Feb 11 12:36:24.877: INFO: Pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057376044s
Feb 11 12:36:26.948: INFO: Pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128129243s
Feb 11 12:36:29.260: INFO: Pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.439801794s
Feb 11 12:36:31.404: INFO: Pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.584579525s
STEP: Saw pod success
Feb 11 12:36:31.405: INFO: Pod "pod-1b45de15-4ccb-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:36:31.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1b45de15-4ccb-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 12:36:31.624: INFO: Waiting for pod pod-1b45de15-4ccb-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:36:31.640: INFO: Pod pod-1b45de15-4ccb-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:36:31.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-phrdq" for this suite.
Feb 11 12:36:37.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:36:37.910: INFO: namespace: e2e-tests-emptydir-phrdq, resource: bindings, ignored listing per whitelist
Feb 11 12:36:38.016: INFO: namespace e2e-tests-emptydir-phrdq deletion completed in 6.361187499s

• [SLOW TEST:17.562 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
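
The (non-root,0666,tmpfs) case boils down to an emptyDir backed by memory (tmpfs), written by a non-root user with 0666 permissions on the file. A rough equivalent, with an illustrative UID and busybox standing in for the mounttest image the suite uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF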
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:36:38.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 11 12:36:46.485: INFO: Pod pod-hostip-25b20f3a-4ccb-11ea-a6e3-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:36:46.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qfd6t" for this suite.
Feb 11 12:37:12.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:37:12.934: INFO: namespace: e2e-tests-pods-qfd6t, resource: bindings, ignored listing per whitelist
Feb 11 12:37:12.983: INFO: namespace e2e-tests-pods-qfd6t deletion completed in 26.330618225s

• [SLOW TEST:34.967 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
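
The single assertion in this spec is that status.hostIP is populated once the pod lands on a node. The same field can be read directly, assuming an existing pod named pod-hostip-example:

kubectl get pod pod-hostip-example -o jsonpath='{.status.hostIP}'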
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:37:12.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:37:13.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mlfml" for this suite.
Feb 11 12:37:37.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:37:37.975: INFO: namespace: e2e-tests-pods-mlfml, resource: bindings, ignored listing per whitelist
Feb 11 12:37:38.298: INFO: namespace e2e-tests-pods-mlfml deletion completed in 25.051090957s

• [SLOW TEST:25.314 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
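
The QoS class is derived from the containers' requests and limits (Guaranteed when every container's CPU and memory limits are set and equal to its requests, Burstable when at least one request or limit is set, BestEffort otherwise) and is surfaced in status.qosClass, which is what the spec verifies. To inspect it on an arbitrary pod, assuming the placeholder name qos-example:

kubectl get pod qos-example -o jsonpath='{.status.qosClass}'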
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:37:38.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-r2jg
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 12:37:38.838: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-r2jg" in namespace "e2e-tests-subpath-lqth9" to be "success or failure"
Feb 11 12:37:38.851: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.165952ms
Feb 11 12:37:40.883: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04503202s
Feb 11 12:37:42.913: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074435616s
Feb 11 12:37:44.929: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090895926s
Feb 11 12:37:46.959: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120294298s
Feb 11 12:37:48.998: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159091574s
Feb 11 12:37:51.016: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.177754123s
Feb 11 12:37:53.034: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.19553183s
Feb 11 12:37:55.054: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 16.21576023s
Feb 11 12:37:57.076: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 18.238035787s
Feb 11 12:37:59.097: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 20.258902723s
Feb 11 12:38:01.114: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 22.275937394s
Feb 11 12:38:03.130: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 24.291401213s
Feb 11 12:38:05.165: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 26.32668945s
Feb 11 12:38:07.190: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 28.351648834s
Feb 11 12:38:09.223: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 30.384275573s
Feb 11 12:38:11.251: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 32.412368708s
Feb 11 12:38:13.275: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Running", Reason="", readiness=false. Elapsed: 34.436241802s
Feb 11 12:38:15.293: INFO: Pod "pod-subpath-test-downwardapi-r2jg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.454488792s
STEP: Saw pod success
Feb 11 12:38:15.293: INFO: Pod "pod-subpath-test-downwardapi-r2jg" satisfied condition "success or failure"
Feb 11 12:38:15.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-r2jg container test-container-subpath-downwardapi-r2jg: 
STEP: delete the pod
Feb 11 12:38:15.383: INFO: Waiting for pod pod-subpath-test-downwardapi-r2jg to disappear
Feb 11 12:38:15.472: INFO: Pod pod-subpath-test-downwardapi-r2jg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-r2jg
Feb 11 12:38:15.472: INFO: Deleting pod "pod-subpath-test-downwardapi-r2jg" in namespace "e2e-tests-subpath-lqth9"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:38:15.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-lqth9" for this suite.
Feb 11 12:38:21.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:38:21.878: INFO: namespace: e2e-tests-subpath-lqth9, resource: bindings, ignored listing per whitelist
Feb 11 12:38:21.915: INFO: namespace e2e-tests-subpath-lqth9 deletion completed in 6.413281053s

• [SLOW TEST:43.617 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
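
The atomic-writer subpath spec mounts a single key of a downward API volume via subPath instead of mounting the whole volume directory. A rough sketch of that wiring only, with hypothetical names (the real test also exercises updates through the atomic writer, which this sketch does not cover):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /probe/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /probe/podname
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF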
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:38:21.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:38:22.852: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 11 12:38:22.996: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 11 12:38:28.276: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 11 12:38:32.309: INFO: Creating deployment "test-rolling-update-deployment"
Feb 11 12:38:32.324: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 11 12:38:32.339: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 11 12:38:34.372: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 11 12:38:34.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 12:38:36.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 12:38:38.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 12:38:40.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 12:38:42.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717021512, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 12:38:44.392: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 11 12:38:44.413: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-v28h5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v28h5/deployments/test-rolling-update-deployment,UID:69ab4c87-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21311986,Generation:1,CreationTimestamp:2020-02-11 12:38:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-11 12:38:32 +0000 UTC 2020-02-11 12:38:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-11 12:38:42 +0000 UTC 2020-02-11 12:38:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 11 12:38:44.420: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-v28h5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v28h5/replicasets/test-rolling-update-deployment-75db98fb4c,UID:69b98955-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21311976,Generation:1,CreationTimestamp:2020-02-11 12:38:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 69ab4c87-4ccb-11ea-a994-fa163e34d433 0xc00214fc97 0xc00214fc98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 11 12:38:44.420: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 11 12:38:44.420: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-v28h5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v28h5/replicasets/test-rolling-update-controller,UID:64094e6a-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21311985,Generation:2,CreationTimestamp:2020-02-11 12:38:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 69ab4c87-4ccb-11ea-a994-fa163e34d433 0xc00214fb07 0xc00214fb08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 11 12:38:44.431: INFO: Pod "test-rolling-update-deployment-75db98fb4c-r7j8t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-r7j8t,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-v28h5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v28h5/pods/test-rolling-update-deployment-75db98fb4c-r7j8t,UID:69bb3624-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21311975,Generation:0,CreationTimestamp:2020-02-11 12:38:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 69b98955-4ccb-11ea-a994-fa163e34d433 0xc0026eeff7 0xc0026eeff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p8z4d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p8z4d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-p8z4d true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ef060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ef080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:38:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:38:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:38:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:38:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-11 12:38:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-11 12:38:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6f05f4d69b3d111f389449b9fe2dfef9cced22e56980502ad482849c6ae48482}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:38:44.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-v28h5" for this suite.
Feb 11 12:38:52.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:38:52.690: INFO: namespace: e2e-tests-deployment-v28h5, resource: bindings, ignored listing per whitelist
Feb 11 12:38:52.758: INFO: namespace e2e-tests-deployment-v28h5 deletion completed in 8.315548037s

• [SLOW TEST:30.842 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
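
The flow above is: create a bare ReplicaSet, create a Deployment whose selector matches it so the ReplicaSet is adopted, then confirm a new revision replaces the old pods. Driving the same rolling update by hand against an existing Deployment (names, label and image below are taken from the log purely as illustration):

kubectl set image deployment/test-rolling-update-deployment redis=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/test-rolling-update-deployment
kubectl get replicasets -l name=sample-pod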
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:38:52.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 11 12:38:54.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312034,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 12:38:54.902: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312034,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 11 12:39:04.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312047,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 11 12:39:04.930: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312047,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 11 12:39:14.970: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312060,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 11 12:39:14.971: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312060,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 11 12:39:24.985: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312072,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 11 12:39:24.985: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-a,UID:771fcedc-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312072,Generation:0,CreationTimestamp:2020-02-11 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 11 12:39:35.013: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-b,UID:8f07919f-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312084,Generation:0,CreationTimestamp:2020-02-11 12:39:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 12:39:35.014: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-b,UID:8f07919f-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312084,Generation:0,CreationTimestamp:2020-02-11 12:39:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 11 12:39:45.033: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-b,UID:8f07919f-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312097,Generation:0,CreationTimestamp:2020-02-11 12:39:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 12:39:45.033: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-65gsx,SelfLink:/api/v1/namespaces/e2e-tests-watch-65gsx/configmaps/e2e-watch-test-configmap-b,UID:8f07919f-4ccb-11ea-a994-fa163e34d433,ResourceVersion:21312097,Generation:0,CreationTimestamp:2020-02-11 12:39:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:39:55.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-65gsx" for this suite.
Feb 11 12:40:01.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:40:01.343: INFO: namespace: e2e-tests-watch-65gsx, resource: bindings, ignored listing per whitelist
Feb 11 12:40:01.467: INFO: namespace e2e-tests-watch-65gsx deletion completed in 6.412276152s

• [SLOW TEST:68.709 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
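
The watch assertions above can be reproduced with a label-selected watch in one terminal and mutations from another, reusing the label key shown in the log (kubectl prints the listed objects rather than raw ADDED/MODIFIED/DELETED event types):

# terminal 1: watch configmaps carrying label A
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch
# terminal 2: generate add, modify and delete notifications
kubectl create configmap e2e-watch-test-configmap-a
kubectl label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a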
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:40:01.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:40:01.662: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 11 12:40:01.704: INFO: Number of nodes with available pods: 0
Feb 11 12:40:01.704: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:02.729: INFO: Number of nodes with available pods: 0
Feb 11 12:40:02.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:03.749: INFO: Number of nodes with available pods: 0
Feb 11 12:40:03.749: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:04.725: INFO: Number of nodes with available pods: 0
Feb 11 12:40:04.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:05.725: INFO: Number of nodes with available pods: 0
Feb 11 12:40:05.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:07.024: INFO: Number of nodes with available pods: 0
Feb 11 12:40:07.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:07.783: INFO: Number of nodes with available pods: 0
Feb 11 12:40:07.783: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:08.826: INFO: Number of nodes with available pods: 0
Feb 11 12:40:08.826: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:09.726: INFO: Number of nodes with available pods: 0
Feb 11 12:40:09.726: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:10.724: INFO: Number of nodes with available pods: 0
Feb 11 12:40:10.724: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:11.728: INFO: Number of nodes with available pods: 1
Feb 11 12:40:11.729: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 11 12:40:11.831: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:12.888: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:13.876: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:14.982: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:15.881: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:18.941: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:19.877: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:20.879: INFO: Wrong image for pod: daemon-set-gr24s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 12:40:20.879: INFO: Pod daemon-set-gr24s is not available
Feb 11 12:40:21.868: INFO: Pod daemon-set-cknvw is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 11 12:40:21.883: INFO: Number of nodes with available pods: 0
Feb 11 12:40:21.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:22.958: INFO: Number of nodes with available pods: 0
Feb 11 12:40:22.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:23.913: INFO: Number of nodes with available pods: 0
Feb 11 12:40:23.913: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:24.961: INFO: Number of nodes with available pods: 0
Feb 11 12:40:24.961: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:25.921: INFO: Number of nodes with available pods: 0
Feb 11 12:40:25.922: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:27.083: INFO: Number of nodes with available pods: 0
Feb 11 12:40:27.083: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:27.912: INFO: Number of nodes with available pods: 0
Feb 11 12:40:27.912: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:28.916: INFO: Number of nodes with available pods: 0
Feb 11 12:40:28.916: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:29.907: INFO: Number of nodes with available pods: 0
Feb 11 12:40:29.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:40:30.947: INFO: Number of nodes with available pods: 1
Feb 11 12:40:30.947: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-q8plq, will wait for the garbage collector to delete the pods
Feb 11 12:40:31.035: INFO: Deleting DaemonSet.extensions daemon-set took: 16.777958ms
Feb 11 12:40:31.236: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.416798ms
Feb 11 12:40:42.655: INFO: Number of nodes with available pods: 0
Feb 11 12:40:42.655: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 12:40:42.661: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-q8plq/daemonsets","resourceVersion":"21312217"},"items":null}

Feb 11 12:40:42.670: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-q8plq/pods","resourceVersion":"21312217"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:40:42.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-q8plq" for this suite.
Feb 11 12:40:48.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:40:48.810: INFO: namespace: e2e-tests-daemonsets-q8plq, resource: bindings, ignored listing per whitelist
Feb 11 12:40:48.911: INFO: namespace e2e-tests-daemonsets-q8plq deletion completed in 6.226815724s

• [SLOW TEST:47.444 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
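The spec above creates a DaemonSet named daemon-set running docker.io/library/nginx:1.14-alpine, waits until every node reports an available pod, then updates the pod template image to gcr.io/kubernetes-e2e-test-images/redis:1.0 and watches the RollingUpdate replace the pod (the "Wrong image for pod" lines track pods still running the old image). A minimal sketch of a manifest that exercises the same behaviour; the label key and container name are illustrative, not taken from the suite:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set                       # name used by the spec; the namespace is generated per test
  spec:
    selector:
      matchLabels:
        daemonset-name: daemon-set         # illustrative label key
    updateStrategy:
      type: RollingUpdate                  # the strategy this spec exercises
    template:
      metadata:
        labels:
          daemonset-name: daemon-set
      spec:
        containers:
        - name: app                        # illustrative container name
          image: docker.io/library/nginx:1.14-alpine   # initial image reported in the log

Changing the image, for example with kubectl -n <namespace> set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0, triggers the node-by-node replacement seen in the log between 12:40:11 and 12:40:30.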
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:40:48.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-hztdw
Feb 11 12:40:59.365: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-hztdw
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 12:40:59.373: INFO: Initial restart count of pod liveness-http is 0
Feb 11 12:41:17.697: INFO: Restart count of pod e2e-tests-container-probe-hztdw/liveness-http is now 1 (18.324152491s elapsed)
Feb 11 12:41:40.089: INFO: Restart count of pod e2e-tests-container-probe-hztdw/liveness-http is now 2 (40.715777217s elapsed)
Feb 11 12:41:58.382: INFO: Restart count of pod e2e-tests-container-probe-hztdw/liveness-http is now 3 (59.00861116s elapsed)
Feb 11 12:42:18.681: INFO: Restart count of pod e2e-tests-container-probe-hztdw/liveness-http is now 4 (1m19.307905736s elapsed)
Feb 11 12:43:17.286: INFO: Restart count of pod e2e-tests-container-probe-hztdw/liveness-http is now 5 (2m17.91298789s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:43:17.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hztdw" for this suite.
Feb 11 12:43:23.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:43:23.575: INFO: namespace: e2e-tests-container-probe-hztdw, resource: bindings, ignored listing per whitelist
Feb 11 12:43:23.728: INFO: namespace e2e-tests-container-probe-hztdw deletion completed in 6.370615766s

• [SLOW TEST:154.817 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
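The liveness-http pod above carries an HTTP liveness probe that fails, so the kubelet kills and restarts the container; restartCount only ever increases because it counts container restarts, and the kubelet's back-off between restarts grows as failures accumulate, which is why the later restarts in the log are spaced further apart. A sketch of a pod with a comparable probe; the image, path and port are placeholders rather than the suite's test image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http
  spec:
    containers:
    - name: liveness
      image: nginx:1.14-alpine             # placeholder; the suite uses its own test image
      livenessProbe:
        httpGet:
          path: /healthz                   # probed path; when it fails, the kubelet restarts the container
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
        failureThreshold: 1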
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:43:23.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 11 12:43:24.066: INFO: Waiting up to 5m0s for pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-containers-wkbpj" to be "success or failure"
Feb 11 12:43:24.196: INFO: Pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 130.393334ms
Feb 11 12:43:26.226: INFO: Pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159937192s
Feb 11 12:43:28.865: INFO: Pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.799203714s
Feb 11 12:43:32.022: INFO: Pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.955969579s
Feb 11 12:43:34.039: INFO: Pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.972567181s
Feb 11 12:43:36.054: INFO: Pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.988024734s
STEP: Saw pod success
Feb 11 12:43:36.054: INFO: Pod "client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:43:36.061: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 12:43:36.849: INFO: Waiting for pod client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:43:36.890: INFO: Pod client-containers-178dd3b3-4ccc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:43:36.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wkbpj" for this suite.
Feb 11 12:43:43.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:43:43.175: INFO: namespace: e2e-tests-containers-wkbpj, resource: bindings, ignored listing per whitelist
Feb 11 12:43:43.324: INFO: namespace e2e-tests-containers-wkbpj deletion completed in 6.259920899s

• [SLOW TEST:19.594 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
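The "override all" pod above replaces both the image's entrypoint and its arguments, runs to completion (Pending, then Succeeded), and the framework reads the container log from node hunter-server-hu5at5svl7ps to verify the output. A sketch of a pod that overrides both fields; the name, image and echoed strings are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-override-all   # the suite appends a generated UID to names like this
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29                   # placeholder; any image with a default command works
      command: ["/bin/echo"]                # replaces the image ENTRYPOINT
      args: ["override", "arguments"]       # replaces the image CMD

Because the container exits 0, the pod phase ends at Succeeded, which is the "success or failure" condition the framework polls for.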
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:43:43.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:43:43.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-js6vp" to be "success or failure"
Feb 11 12:43:43.617: INFO: Pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.187097ms
Feb 11 12:43:45.648: INFO: Pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084429927s
Feb 11 12:43:47.667: INFO: Pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103519172s
Feb 11 12:43:49.785: INFO: Pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221086061s
Feb 11 12:43:52.076: INFO: Pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512647s
Feb 11 12:43:54.223: INFO: Pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.659755072s
STEP: Saw pod success
Feb 11 12:43:54.223: INFO: Pod "downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:43:54.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:43:54.538: INFO: Waiting for pod downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:43:54.901: INFO: Pod downwardapi-volume-232db519-4ccc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:43:54.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-js6vp" for this suite.
Feb 11 12:44:01.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:44:01.381: INFO: namespace: e2e-tests-projected-js6vp, resource: bindings, ignored listing per whitelist
Feb 11 12:44:01.509: INFO: namespace e2e-tests-projected-js6vp deletion completed in 6.591379594s

• [SLOW TEST:18.184 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
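The projected downwardAPI spec above mounts a volume whose file contains the container's own CPU request, then checks the file contents from the pod log. A sketch of the mechanism, assuming an illustrative 250m request and file name; the image and command are placeholders:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example        # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29                   # placeholder; the container just prints the file
      command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.cpu
                divisor: 1m                 # with this divisor the file reports the request in millicores (250 here)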
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:44:01.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-55hjj in namespace e2e-tests-proxy-ngzxv
I0211 12:44:01.984165       9 runners.go:184] Created replication controller with name: proxy-service-55hjj, namespace: e2e-tests-proxy-ngzxv, replica count: 1
I0211 12:44:03.035722       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:04.036212       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:05.036943       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:06.037468       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:07.037890       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:08.038376       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:09.039075       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:10.039508       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:11.039878       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:12.040478       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 12:44:13.041095       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 12:44:14.041939       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 12:44:15.042462       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 12:44:16.042948       9 runners.go:184] proxy-service-55hjj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 11 12:44:16.051: INFO: setup took 14.155600352s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 11 12:44:16.095: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-ngzxv/pods/http:proxy-service-55hjj-scbtq:162/proxy/: bar (200; 43.712802ms)
Feb 11 12:44:16.095: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-ngzxv/pods/proxy-service-55hjj-scbtq:160/proxy/: foo (200; 43.784866ms)
Feb 11 12:44:16.095: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-ngzxv/services/proxy-service-55hjj:portname1/proxy/: foo (200; 43.928143ms)
Feb 11 12:44:16.095: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-ngzxv/pods/http:proxy-service-55hjj-scbtq:1080/proxy/: ...
[... remaining proxy test output and the start of the "[sig-storage] Downward API volume" spec truncated ...]
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:44:29.860: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-znrtv" to be "success or failure"
Feb 11 12:44:29.872: INFO: Pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.973919ms
Feb 11 12:44:32.200: INFO: Pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339965879s
Feb 11 12:44:34.217: INFO: Pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356663221s
Feb 11 12:44:36.512: INFO: Pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651856837s
Feb 11 12:44:38.778: INFO: Pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.91765006s
Feb 11 12:44:40.846: INFO: Pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.985556501s
STEP: Saw pod success
Feb 11 12:44:40.846: INFO: Pod "downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:44:40.873: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:44:41.223: INFO: Waiting for pod downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:44:41.242: INFO: Pod downwardapi-volume-3ec3e009-4ccc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:44:41.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-znrtv" for this suite.
Feb 11 12:44:47.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:44:47.463: INFO: namespace: e2e-tests-downward-api-znrtv, resource: bindings, ignored listing per whitelist
Feb 11 12:44:47.521: INFO: namespace e2e-tests-downward-api-znrtv deletion completed in 6.259185446s

• [SLOW TEST:17.888 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
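Here the container sets no memory limit, so the limits.memory value exposed through the downward API volume falls back to the node's allocatable memory, and the test only verifies that the mounted file holds a sensible value. A sketch using a plain downwardAPI volume; the image and command are placeholders:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example        # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29                   # placeholder
      command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
      # no resources.limits.memory is set, so the file reports node allocatable memory
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory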
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:44:47.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:45:47.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lk58l" for this suite.
Feb 11 12:46:11.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:46:12.006: INFO: namespace: e2e-tests-container-probe-lk58l, resource: bindings, ignored listing per whitelist
Feb 11 12:46:12.066: INFO: namespace e2e-tests-container-probe-lk58l deletion completed in 24.222738123s

• [SLOW TEST:84.545 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
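A failing readiness probe only keeps the pod out of the Ready condition; unlike a liveness failure it never restarts the container, so this spec waits roughly a minute and confirms Ready stays false and restartCount stays 0. A sketch of such a pod, with an illustrative always-failing exec probe and a placeholder image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-webserver                    # illustrative name
  spec:
    containers:
    - name: test-webserver
      image: nginx:1.14-alpine              # placeholder long-running container
      readinessProbe:
        exec:
          command: ["/bin/false"]           # always fails, so the Ready condition never becomes true
        initialDelaySeconds: 5
        periodSeconds: 5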
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:46:12.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 11 12:46:12.503: INFO: Waiting up to 5m0s for pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-containers-ltsvl" to be "success or failure"
Feb 11 12:46:12.527: INFO: Pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.7717ms
Feb 11 12:46:14.576: INFO: Pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073325171s
Feb 11 12:46:16.649: INFO: Pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146338724s
Feb 11 12:46:18.926: INFO: Pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423192883s
Feb 11 12:46:20.944: INFO: Pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.440751667s
Feb 11 12:46:22.962: INFO: Pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.458882245s
STEP: Saw pod success
Feb 11 12:46:22.962: INFO: Pod "client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:46:22.971: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 12:46:23.026: INFO: Waiting for pod client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:46:23.042: INFO: Pod client-containers-7bdd0a09-4ccc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:46:23.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-ltsvl" for this suite.
Feb 11 12:46:29.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:46:29.349: INFO: namespace: e2e-tests-containers-ltsvl, resource: bindings, ignored listing per whitelist
Feb 11 12:46:29.469: INFO: namespace e2e-tests-containers-ltsvl deletion completed in 6.416425684s

• [SLOW TEST:17.403 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
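This variant overrides only the arguments: args replaces the image's CMD while any ENTRYPOINT baked into the image still runs first, which is what distinguishes it from the "override all" case earlier. A sketch with illustrative names and arguments:

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-override-args   # illustrative; the suite generates unique names
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29                   # placeholder image
      args: ["echo", "override", "arguments"]   # replaces CMD only; the image ENTRYPOINT, if set, is kept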
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:46:29.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:46:29.699: INFO: Creating deployment "nginx-deployment"
Feb 11 12:46:29.714: INFO: Waiting for observed generation 1
Feb 11 12:46:32.337: INFO: Waiting for all required pods to come up
Feb 11 12:46:32.355: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 11 12:47:12.298: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 11 12:47:12.328: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 11 12:47:12.346: INFO: Updating deployment nginx-deployment
Feb 11 12:47:12.346: INFO: Waiting for observed generation 2
Feb 11 12:47:15.345: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 11 12:47:15.355: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 11 12:47:16.261: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 11 12:47:16.330: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 11 12:47:16.330: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 11 12:47:16.352: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 11 12:47:16.868: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 11 12:47:16.868: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 11 12:47:16.916: INFO: Updating deployment nginx-deployment
Feb 11 12:47:16.916: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 11 12:47:16.934: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 11 12:47:19.492: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
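The proportional-scaling arithmetic behind the last two lines: the rollout was stuck on the unresolvable nginx:404 image with 8 replicas on the old ReplicaSet and 5 on the new one, and the strategy recorded in the Deployment dump below allows maxSurge: 3 and maxUnavailable: 2. Scaling the Deployment from 10 to 30 therefore permits up to 33 pods in total, and the extra 20 are split roughly in proportion to the existing 8:5 sizes, which yields .spec.replicas of 20 and 13. A sketch of the strategy stanza involved (replicas shown after the scale-up; probe and resource details omitted):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
  spec:
    replicas: 30                            # scaled up from 10 while the rollout was stuck
    selector:
      matchLabels:
        name: nginx
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 3                         # up to 33 pods may exist during the rollout
        maxUnavailable: 2                   # at most 2 below the desired count may be unavailable
    template:
      metadata:
        labels:
          name: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:404                  # the deliberately unresolvable image from the log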
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 11 12:47:19.763: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7g9xv/deployments/nginx-deployment,UID:86371dae-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313088,Generation:3,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-11 12:47:13 +0000 UTC 2020-02-11 12:46:29 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-11 12:47:17 +0000 UTC 2020-02-11 12:47:17 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 11 12:47:21.393: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7g9xv/replicasets/nginx-deployment-5c98f8fb5,UID:9fa3c504-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313081,Generation:3,CreationTimestamp:2020-02-11 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 86371dae-4ccc-11ea-a994-fa163e34d433 0xc001ab9197 0xc001ab9198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 11 12:47:21.393: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 11 12:47:21.394: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7g9xv/replicasets/nginx-deployment-85ddf47c5d,UID:863b9f6f-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313115,Generation:3,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 86371dae-4ccc-11ea-a994-fa163e34d433 0xc001ab92b7 0xc001ab92b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 11 12:47:22.068: INFO: Pod "nginx-deployment-5c98f8fb5-5wln7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5wln7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-5wln7,UID:9fc91e70-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313072,Generation:0,CreationTimestamp:2020-02-11 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc00198d1c7 0xc00198d1c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00198d230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00198d250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-11 12:47:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.069: INFO: Pod "nginx-deployment-5c98f8fb5-8r76c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8r76c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-8r76c,UID:a001b626-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313074,Generation:0,CreationTimestamp:2020-02-11 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc00198d5e7 0xc00198d5e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00198d650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00198d680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-11 12:47:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.069: INFO: Pod "nginx-deployment-5c98f8fb5-8x9qc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8x9qc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-8x9qc,UID:a3df1ffb-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313118,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc00198da37 0xc00198da38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00198daa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00198dac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.070: INFO: Pod "nginx-deployment-5c98f8fb5-b2pdj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b2pdj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-b2pdj,UID:a4588a1b-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313134,Generation:0,CreationTimestamp:2020-02-11 12:47:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc00198dd27 0xc00198dd28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00198dd90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00198ddb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.070: INFO: Pod "nginx-deployment-5c98f8fb5-dm2md" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dm2md,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-dm2md,UID:a458deda-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313138,Generation:0,CreationTimestamp:2020-02-11 12:47:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc00198de87 0xc00198de88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00198def0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00198df10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.070: INFO: Pod "nginx-deployment-5c98f8fb5-hl6gz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hl6gz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-hl6gz,UID:a004925e-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313076,Generation:0,CreationTimestamp:2020-02-11 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc00198df87 0xc00198df88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00198dff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001514060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-11 12:47:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.071: INFO: Pod "nginx-deployment-5c98f8fb5-jqqjx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jqqjx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-jqqjx,UID:a403c2bb-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313127,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc0015143a7 0xc0015143a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001514410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001514430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.071: INFO: Pod "nginx-deployment-5c98f8fb5-kfzvl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kfzvl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-kfzvl,UID:a403e2ee-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313123,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc001514887 0xc001514888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001514c20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001514c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.071: INFO: Pod "nginx-deployment-5c98f8fb5-msz7g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-msz7g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-msz7g,UID:9fb36312-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313056,Generation:0,CreationTimestamp:2020-02-11 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc001514d27 0xc001514d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001514df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001514e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-11 12:47:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.072: INFO: Pod "nginx-deployment-5c98f8fb5-n8nt6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n8nt6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-n8nt6,UID:a5073222-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313135,Generation:0,CreationTimestamp:2020-02-11 12:47:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc001515367 0xc001515368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015153d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015153f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.072: INFO: Pod "nginx-deployment-5c98f8fb5-qj6k6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qj6k6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-qj6k6,UID:a458e7f2-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313136,Generation:0,CreationTimestamp:2020-02-11 12:47:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc001515450 0xc001515451}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015155d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001515790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.072: INFO: Pod "nginx-deployment-5c98f8fb5-w9wgm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w9wgm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-w9wgm,UID:a458bcc3-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313137,Generation:0,CreationTimestamp:2020-02-11 12:47:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc0015158e7 0xc0015158e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001515a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001515a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.072: INFO: Pod "nginx-deployment-5c98f8fb5-wrrv5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wrrv5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-5c98f8fb5-wrrv5,UID:9fc80162-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313070,Generation:0,CreationTimestamp:2020-02-11 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9fa3c504-4ccc-11ea-a994-fa163e34d433 0xc001515b27 0xc001515b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001515ca0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001515cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-11 12:47:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
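The pods dumped above all belong to ReplicaSet nginx-deployment-5c98f8fb5, whose template references the image tag nginx:404; that tag is not expected to be pullable, so each of these pods is still Pending and is reported as "not available". The Go sketch below is not part of the e2e framework; it is a minimal illustration, assuming an older client-go that matches this v1.13 cluster (List without a context argument), a kubeconfig at /root/.kube/config, and the namespace and name=nginx label shown in this log. It reproduces the same per-pod check by listing the Deployment's pods and printing phase, readiness, and image.

// availability_check.go: a hedged sketch that mirrors the per-pod "is available"
// logging above by listing the pods behind the Deployment and reporting their state.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's Ready condition is True,
// which is what separates the "available" from the "not available" dumps.
func isPodReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector taken from the log lines above.
	pods, err := clientset.CoreV1().Pods("e2e-tests-deployment-7g9xv").List(metav1.ListOptions{
		LabelSelector: "name=nginx",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("%s phase=%s ready=%t image=%s\n",
			pod.Name, pod.Status.Phase, isPodReady(pod), pod.Spec.Containers[0].Image)
	}
}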
Feb 11 12:47:22.073: INFO: Pod "nginx-deployment-85ddf47c5d-46jjv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-46jjv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-46jjv,UID:a3e9ab5b-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313122,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc001515d87 0xc001515d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001515e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001515ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.073: INFO: Pod "nginx-deployment-85ddf47c5d-5rn7j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5rn7j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-5rn7j,UID:a3e9e2d7-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313120,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc001515f17 0xc001515f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001515f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001aba500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.073: INFO: Pod "nginx-deployment-85ddf47c5d-66ptb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-66ptb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-66ptb,UID:a3e8f6b1-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313116,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc001aba5f7 0xc001aba5f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001aba700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001abb250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.074: INFO: Pod "nginx-deployment-85ddf47c5d-7jbxb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7jbxb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-7jbxb,UID:8655ed29-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21312994,Generation:0,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc001abb2c7 0xc001abb2c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001abb330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001abb400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:47:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f12d48acb394ee1e83e9b6af60a7e09c007c988834bb7f9d67d6bcc754f56242}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.074: INFO: Pod "nginx-deployment-85ddf47c5d-85bgg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-85bgg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-85bgg,UID:8662c5f8-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313002,Generation:0,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc001abb4f7 0xc001abb4f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001abb710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001abb730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:47:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b07f70f56e2f081605ae1a6510fb1c6cf1bc7476343050cf273424585e46c7ff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.074: INFO: Pod "nginx-deployment-85ddf47c5d-9vvlp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9vvlp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-9vvlp,UID:a394d918-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313124,Generation:0,CreationTimestamp:2020-02-11 12:47:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc001abba87 0xc001abba88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018f8150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018f81e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-11 12:47:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.075: INFO: Pod "nginx-deployment-85ddf47c5d-brs4v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-brs4v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-brs4v,UID:a39a5592-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313094,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0018f8347 0xc0018f8348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018f8460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018f8480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.075: INFO: Pod "nginx-deployment-85ddf47c5d-cb9bh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cb9bh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-cb9bh,UID:a3e01f4e-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313101,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0018f8567 0xc0018f8568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018f9580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018f96d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.076: INFO: Pod "nginx-deployment-85ddf47c5d-ct9kj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ct9kj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-ct9kj,UID:8662877c-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21312988,Generation:0,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0018f98d7 0xc0018f98d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012e62a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e62c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:47:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5eb6dddef47c2c5139686549392d58ff65fc3022236bdbf1f7ca39a4507d190a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.076: INFO: Pod "nginx-deployment-85ddf47c5d-cxf5n" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cxf5n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-cxf5n,UID:865db9f9-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313021,Generation:0,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0012e6417 0xc0012e6418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012e6480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e64b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:47:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ad4fbf46c4106114ad0601fcd90c328513f669114fa77e9f599e1994960e6cd1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.077: INFO: Pod "nginx-deployment-85ddf47c5d-fb862" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fb862,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-fb862,UID:8675f4d4-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313005,Generation:0,CreationTimestamp:2020-02-11 12:46:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0012e6577 0xc0012e6578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012e6780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e67b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:47:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://54309cf1e49c4fab2b79481b1502e141fd77b0827a8c15b9b2f5a37d554653aa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.077: INFO: Pod "nginx-deployment-85ddf47c5d-jjxsc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jjxsc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-jjxsc,UID:a3e00350-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313100,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0012e6917 0xc0012e6918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012e6980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e69a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.077: INFO: Pod "nginx-deployment-85ddf47c5d-p72mq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p72mq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-p72mq,UID:a3dfb2ca-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313102,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0012e6bb7 0xc0012e6bb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012e6f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e6f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.078: INFO: Pod "nginx-deployment-85ddf47c5d-pgd9q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pgd9q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-pgd9q,UID:a39a23cd-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313098,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0012e70d7 0xc0012e70d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012e71e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e7200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.078: INFO: Pod "nginx-deployment-85ddf47c5d-q6qng" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q6qng,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-q6qng,UID:865e6995-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21312998,Generation:0,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0012e7587 0xc0012e7588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012e75f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e7610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:46:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://11eba023a19992867a6a4b11b48488f0143bb22c977a8afc79647142f999c211}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.078: INFO: Pod "nginx-deployment-85ddf47c5d-rn6sq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rn6sq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-rn6sq,UID:a3e019cb-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313104,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0011c8237 0xc0011c8238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011c82a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011c82c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.078: INFO: Pod "nginx-deployment-85ddf47c5d-srg62" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-srg62,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-srg62,UID:a3e90d88-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313119,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0011c83f7 0xc0011c83f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011c85b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011c85d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.079: INFO: Pod "nginx-deployment-85ddf47c5d-t8rqz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t8rqz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-t8rqz,UID:a3e90553-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313121,Generation:0,CreationTimestamp:2020-02-11 12:47:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0011c8647 0xc0011c8648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011c86b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011c8750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.079: INFO: Pod "nginx-deployment-85ddf47c5d-tx9ws" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tx9ws,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-tx9ws,UID:8662b128-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21313008,Generation:0,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0011c8857 0xc0011c8858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011c88c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011c88e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:47:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://06c9e7740546c28305948e203789e2153ba81f432c0ae97d68fcb959ccb4ce3c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 12:47:22.080: INFO: Pod "nginx-deployment-85ddf47c5d-vpc86" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vpc86,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7g9xv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7g9xv/pods/nginx-deployment-85ddf47c5d-vpc86,UID:8662cde3-4ccc-11ea-a994-fa163e34d433,ResourceVersion:21312983,Generation:0,CreationTimestamp:2020-02-11 12:46:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 863b9f6f-4ccc-11ea-a994-fa163e34d433 0xc0011c89a7 0xc0011c89a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xz9lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xz9lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xz9lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011c8a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011c8a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:47:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 12:46:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-11 12:46:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 12:47:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d09a1ac2c2d2574ccf6331137025187d17927d9e6a5182d20cd4886d54e045d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:47:22.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-7g9xv" for this suite.
Feb 11 12:48:14.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:48:14.850: INFO: namespace: e2e-tests-deployment-7g9xv, resource: bindings, ignored listing per whitelist
Feb 11 12:48:15.029: INFO: namespace e2e-tests-deployment-7g9xv deletion completed in 52.135707843s

• [SLOW TEST:105.559 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
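The Deployment spec above ends with a mix of available and not-yet-available pods because proportional scaling splits new replicas between the old and new ReplicaSets while a rollout is still in flight, within the rolling-update surge/unavailability budget. As a rough sketch only (replica counts and the surge/unavailable budgets are assumptions; the image and the `name: nginx` label are the values visible in the pod dumps above), a Deployment of that shape could be built with the Go API types like this, given the k8s.io/api and k8s.io/apimachinery modules:

```go
// Illustrative only: a Deployment shaped like the one the spec scales.
// Replica count and the surge/unavailable budgets are assumptions, not
// values read from the log.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,       // how many extra pods may exist during a rollout
					MaxUnavailable: &maxUnavailable, // how many pods may be missing during a rollout
				},
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine", // image seen in the pod dumps above
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
```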
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:48:15.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 11 12:48:15.619: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:48:52.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-sz9cx" for this suite.
Feb 11 12:49:00.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:49:00.447: INFO: namespace: e2e-tests-init-container-sz9cx, resource: bindings, ignored listing per whitelist
Feb 11 12:49:00.502: INFO: namespace e2e-tests-init-container-sz9cx deletion completed in 8.300878981s

• [SLOW TEST:45.472 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
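The InitContainer spec above relies on a pod whose restartPolicy is Never and whose init container exits non-zero, so the app container must never start and the pod ends up Failed. A minimal sketch of such a pod, using a placeholder image and commands rather than the actual e2e fixture:

```go
// Hedged sketch: a restartPolicy=Never pod whose init container fails.
// Image and commands are placeholders, not taken from the e2e source.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			InitContainers: []v1.Container{{
				Name:    "init-fails",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits non-zero, so the pod fails
			}},
			Containers: []v1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"/bin/true"}, // never runs: the init container never succeeds
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```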
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:49:00.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e040f7fc-4ccc-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:49:00.790: INFO: Waiting up to 5m0s for pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-28jb7" to be "success or failure"
Feb 11 12:49:00.804: INFO: Pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.753739ms
Feb 11 12:49:03.016: INFO: Pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225593762s
Feb 11 12:49:05.026: INFO: Pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23558044s
Feb 11 12:49:07.109: INFO: Pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318310783s
Feb 11 12:49:09.120: INFO: Pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330034275s
Feb 11 12:49:11.142: INFO: Pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.352180562s
STEP: Saw pod success
Feb 11 12:49:11.143: INFO: Pod "pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:49:11.148: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 11 12:49:11.403: INFO: Waiting for pod pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:49:11.431: INFO: Pod pod-secrets-e0425934-4ccc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:49:11.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-28jb7" for this suite.
Feb 11 12:49:17.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:49:17.714: INFO: namespace: e2e-tests-secrets-28jb7, resource: bindings, ignored listing per whitelist
Feb 11 12:49:17.759: INFO: namespace e2e-tests-secrets-28jb7 deletion completed in 6.290422208s

• [SLOW TEST:17.257 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
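The Secrets spec above creates a Secret, mounts it into a short-lived pod through a secret volume, and reads the projected file back from the `secret-volume-test` container. The following sketch shows the general shape of that Secret/pod pair; the object names, key, file mode, and busybox command are assumptions, not values from the log:

```go
// Rough sketch of the Secret plus consumer pod the spec describes.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	secret := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName:  "secret-test",
						DefaultMode: int32Ptr(0644), // mode of the projected files
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "secret-volume-test", // container name seen in the log above
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```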
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:49:17.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 11 12:49:18.111: INFO: Waiting up to 5m0s for pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-lczj7" to be "success or failure"
Feb 11 12:49:18.121: INFO: Pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087777ms
Feb 11 12:49:20.158: INFO: Pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047014853s
Feb 11 12:49:22.181: INFO: Pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069502274s
Feb 11 12:49:24.543: INFO: Pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432400909s
Feb 11 12:49:26.630: INFO: Pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.518516978s
Feb 11 12:49:28.730: INFO: Pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.619109325s
STEP: Saw pod success
Feb 11 12:49:28.731: INFO: Pod "pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:49:28.743: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 12:49:28.894: INFO: Waiting for pod pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:49:28.904: INFO: Pod pod-ea9364bd-4ccc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:49:28.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lczj7" for this suite.
Feb 11 12:49:35.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:49:35.190: INFO: namespace: e2e-tests-emptydir-lczj7, resource: bindings, ignored listing per whitelist
Feb 11 12:49:35.196: INFO: namespace e2e-tests-emptydir-lczj7 deletion completed in 6.270226759s

• [SLOW TEST:17.436 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
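The EmptyDir spec above checks a memory-backed (tmpfs) emptyDir mounted as root with 0777 permissions. A hedged sketch of a pod exercising the same knobs; the real suite uses its own mount-test image, so the busybox image and shell command here are stand-ins:

```go
// Illustrative pod for the (root,0777,tmpfs) case: a memory-backed emptyDir
// whose permissions are set and read back from inside the container.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0777"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container", // container name seen in the log above
				Image: "busybox",
				Command: []string{"sh", "-c",
					"chmod 0777 /test-volume && stat -c %a /test-volume"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```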
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:49:35.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 11 12:49:35.410: INFO: Waiting up to 5m0s for pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-j85nz" to be "success or failure"
Feb 11 12:49:35.418: INFO: Pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.863137ms
Feb 11 12:49:37.433: INFO: Pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023144609s
Feb 11 12:49:39.449: INFO: Pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03937383s
Feb 11 12:49:41.753: INFO: Pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343743756s
Feb 11 12:49:43.781: INFO: Pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371585132s
Feb 11 12:49:45.797: INFO: Pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.387007156s
STEP: Saw pod success
Feb 11 12:49:45.797: INFO: Pod "downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:49:45.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb 11 12:49:46.020: INFO: Waiting for pod downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:49:46.026: INFO: Pod downward-api-f4e39908-4ccc-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:49:46.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j85nz" for this suite.
Feb 11 12:49:52.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:49:52.232: INFO: namespace: e2e-tests-downward-api-j85nz, resource: bindings, ignored listing per whitelist
Feb 11 12:49:52.253: INFO: namespace e2e-tests-downward-api-j85nz deletion completed in 6.218897541s

• [SLOW TEST:17.056 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
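The Downward API spec above injects the container's own CPU/memory requests and limits into environment variables via resourceFieldRef and then reads them back from the `dapi-container` logs. A sketch with made-up resource values and env var names:

```go
// Sketch of a pod exposing its own requests/limits as env vars through the
// downward API; the quantities and variable names are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container", // container name seen in the log above
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("250m"),
						v1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("500m"),
						v1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				// Each env var is resolved from this container's own resources.
				Env: []v1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &v1.EnvVarSource{
						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &v1.EnvVarSource{
						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"}}},
					{Name: "CPU_REQUEST", ValueFrom: &v1.EnvVarSource{
						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "requests.cpu"}}},
					{Name: "MEMORY_REQUEST", ValueFrom: &v1.EnvVarSource{
						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "requests.memory"}}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```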
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:49:52.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:50:02.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-c8dxh" for this suite.
Feb 11 12:50:57.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:50:57.275: INFO: namespace: e2e-tests-kubelet-test-c8dxh, resource: bindings, ignored listing per whitelist
Feb 11 12:50:57.284: INFO: namespace e2e-tests-kubelet-test-c8dxh deletion completed in 54.529122158s

• [SLOW TEST:65.030 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
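The Kubelet spec above sets pod-level hostAliases, which the kubelet appends to the container's /etc/hosts, and then verifies the rendered entries. A minimal sketch with invented IPs and hostnames:

```go
// Sketch of a busybox pod with hostAliases; the IPs and hostnames are made up.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// Rendered by the kubelet as extra lines in the container's /etc/hosts.
			HostAliases: []v1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []v1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox",
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```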
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:50:57.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 11 12:50:57.551: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 11 12:51:02.581: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:51:03.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-tvlcc" for this suite.
Feb 11 12:51:11.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:51:12.273: INFO: namespace: e2e-tests-replication-controller-tvlcc, resource: bindings, ignored listing per whitelist
Feb 11 12:51:12.347: INFO: namespace e2e-tests-replication-controller-tvlcc deletion completed in 8.569049734s

• [SLOW TEST:15.062 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:51:12.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:51:26.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-ctx7j" for this suite.
Feb 11 12:51:48.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:51:48.745: INFO: namespace: e2e-tests-replication-controller-ctx7j, resource: bindings, ignored listing per whitelist
Feb 11 12:51:48.834: INFO: namespace e2e-tests-replication-controller-ctx7j deletion completed in 22.469182261s

• [SLOW TEST:36.487 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
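The ReplicationController adoption spec above first creates a bare pod labelled `name=pod-adoption`, then a controller whose selector matches that label; instead of creating a fresh replica, the controller adopts the orphan by taking an owner reference on it. A sketch of that pod/controller pair (the image is an assumption, the `pod-adoption` name comes from the log):

```go
// Sketch of the orphan pod and the ReplicationController that adopts it.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	orphan := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-adoption",
			Labels: map[string]string{"name": "pod-adoption"}, // matches the RC selector below
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}
	rc := &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: map[string]string{"name": "pod-adoption"},
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-adoption"}},
				Spec:       orphan.Spec,
			},
		},
	}
	for _, obj := range []interface{}{orphan, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```

The release case logged earlier works the other way around: removing or changing the matched label on one of the controller's pods drops it out of the selector, so the controller releases ownership and replaces it.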
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:51:48.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-44a2b10b-4ccd-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:51:49.287: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-t2d78" to be "success or failure"
Feb 11 12:51:49.322: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.755907ms
Feb 11 12:51:51.338: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051002588s
Feb 11 12:51:53.355: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068362444s
Feb 11 12:51:55.421: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134493563s
Feb 11 12:51:57.437: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149776547s
Feb 11 12:51:59.458: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170769623s
Feb 11 12:52:01.725: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.438206298s
STEP: Saw pod success
Feb 11 12:52:01.725: INFO: Pod "pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:52:01.733: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 12:52:02.071: INFO: Waiting for pod pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:52:02.166: INFO: Pod pod-projected-secrets-44ad2119-4ccd-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:52:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t2d78" for this suite.
Feb 11 12:52:08.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:52:08.363: INFO: namespace: e2e-tests-projected-t2d78, resource: bindings, ignored listing per whitelist
Feb 11 12:52:08.441: INFO: namespace e2e-tests-projected-t2d78 deletion completed in 6.24537951s

• [SLOW TEST:19.607 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
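The Projected secret spec above mounts a secret through a projected volume while the pod runs as a non-root user with an fsGroup, so the projected files get the requested mode and group ownership. The sketch below uses illustrative UID/GID and mode values; only the `projected-secret-volume-test` container name is taken from the log:

```go
// Sketch of a projected-secret pod running as non-root with an fsGroup and a
// non-default file mode.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root UID
				FSGroup:   int64Ptr(1001), // group ownership applied to the projected files
			},
			Volumes: []v1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0440),
						Sources: []v1.VolumeProjection{{
							Secret: &v1.SecretProjection{
								LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "projected-secret-volume-test", // container name seen in the log above
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```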
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:52:08.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-s76dz/secret-test-504bdc26-4ccd-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:52:08.858: INFO: Waiting up to 5m0s for pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-s76dz" to be "success or failure"
Feb 11 12:52:08.869: INFO: Pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.283393ms
Feb 11 12:52:10.955: INFO: Pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09611732s
Feb 11 12:52:12.977: INFO: Pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118155208s
Feb 11 12:52:15.003: INFO: Pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144199133s
Feb 11 12:52:17.015: INFO: Pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156426373s
Feb 11 12:52:19.031: INFO: Pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.172865314s
STEP: Saw pod success
Feb 11 12:52:19.032: INFO: Pod "pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:52:19.036: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005 container env-test: 
STEP: delete the pod
Feb 11 12:52:20.125: INFO: Waiting for pod pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:52:20.137: INFO: Pod pod-configmaps-50587ff0-4ccd-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:52:20.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s76dz" for this suite.
Feb 11 12:52:26.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:52:26.360: INFO: namespace: e2e-tests-secrets-s76dz, resource: bindings, ignored listing per whitelist
Feb 11 12:52:26.408: INFO: namespace e2e-tests-secrets-s76dz deletion completed in 6.264839423s

• [SLOW TEST:17.966 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
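The Secrets spec above wires a single secret key into the `env-test` container's environment with a secretKeyRef and then checks the dumped environment. A sketch with an assumed secret name, key, and image:

```go
// Sketch of a pod consuming a Secret key through an environment variable.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "env-test", // container name seen in the log above
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []v1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &v1.EnvVarSource{
						SecretKeyRef: &v1.SecretKeySelector{
							LocalObjectReference: v1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```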
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:52:26.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:52:26.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 11 12:52:26.968: INFO: stderr: ""
Feb 11 12:52:26.968: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:52:26.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-57fp4" for this suite.
Feb 11 12:52:33.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:52:33.056: INFO: namespace: e2e-tests-kubectl-57fp4, resource: bindings, ignored listing per whitelist
Feb 11 12:52:33.226: INFO: namespace e2e-tests-kubectl-57fp4 deletion completed in 6.247640591s

• [SLOW TEST:6.818 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:52:33.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:52:33.448: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 11 12:52:33.470: INFO: Number of nodes with available pods: 0
Feb 11 12:52:33.470: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 11 12:52:33.630: INFO: Number of nodes with available pods: 0
Feb 11 12:52:33.630: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:34.687: INFO: Number of nodes with available pods: 0
Feb 11 12:52:34.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:35.643: INFO: Number of nodes with available pods: 0
Feb 11 12:52:35.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:36.663: INFO: Number of nodes with available pods: 0
Feb 11 12:52:36.663: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:37.675: INFO: Number of nodes with available pods: 0
Feb 11 12:52:37.676: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:38.643: INFO: Number of nodes with available pods: 0
Feb 11 12:52:38.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:39.663: INFO: Number of nodes with available pods: 0
Feb 11 12:52:39.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:40.915: INFO: Number of nodes with available pods: 0
Feb 11 12:52:40.915: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:41.947: INFO: Number of nodes with available pods: 0
Feb 11 12:52:41.947: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:42.656: INFO: Number of nodes with available pods: 0
Feb 11 12:52:42.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:43.647: INFO: Number of nodes with available pods: 0
Feb 11 12:52:43.648: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:44.643: INFO: Number of nodes with available pods: 1
Feb 11 12:52:44.643: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 11 12:52:44.742: INFO: Number of nodes with available pods: 1
Feb 11 12:52:44.742: INFO: Number of running nodes: 0, number of available pods: 1
Feb 11 12:52:45.764: INFO: Number of nodes with available pods: 0
Feb 11 12:52:45.765: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 11 12:52:45.805: INFO: Number of nodes with available pods: 0
Feb 11 12:52:45.805: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:47.278: INFO: Number of nodes with available pods: 0
Feb 11 12:52:47.278: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:48.751: INFO: Number of nodes with available pods: 0
Feb 11 12:52:48.751: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:49.045: INFO: Number of nodes with available pods: 0
Feb 11 12:52:49.045: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:49.838: INFO: Number of nodes with available pods: 0
Feb 11 12:52:49.838: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:51.908: INFO: Number of nodes with available pods: 0
Feb 11 12:52:51.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:52.822: INFO: Number of nodes with available pods: 0
Feb 11 12:52:52.822: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:53.833: INFO: Number of nodes with available pods: 0
Feb 11 12:52:53.834: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:54.816: INFO: Number of nodes with available pods: 0
Feb 11 12:52:54.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:55.819: INFO: Number of nodes with available pods: 0
Feb 11 12:52:55.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:56.815: INFO: Number of nodes with available pods: 0
Feb 11 12:52:56.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:57.817: INFO: Number of nodes with available pods: 0
Feb 11 12:52:57.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:52:58.832: INFO: Number of nodes with available pods: 0
Feb 11 12:52:58.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:00.343: INFO: Number of nodes with available pods: 0
Feb 11 12:53:00.343: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:00.819: INFO: Number of nodes with available pods: 0
Feb 11 12:53:00.820: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:01.944: INFO: Number of nodes with available pods: 0
Feb 11 12:53:01.944: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:02.825: INFO: Number of nodes with available pods: 0
Feb 11 12:53:02.825: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:03.832: INFO: Number of nodes with available pods: 0
Feb 11 12:53:03.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:04.820: INFO: Number of nodes with available pods: 1
Feb 11 12:53:04.821: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kp6hv, will wait for the garbage collector to delete the pods
Feb 11 12:53:04.923: INFO: Deleting DaemonSet.extensions daemon-set took: 29.728606ms
Feb 11 12:53:05.023: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.672418ms
Feb 11 12:53:11.049: INFO: Number of nodes with available pods: 0
Feb 11 12:53:11.049: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 12:53:11.054: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kp6hv/daemonsets","resourceVersion":"21314019"},"items":null}

Feb 11 12:53:11.057: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kp6hv/pods","resourceVersion":"21314019"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:53:11.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kp6hv" for this suite.
Feb 11 12:53:17.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:53:17.325: INFO: namespace: e2e-tests-daemonsets-kp6hv, resource: bindings, ignored listing per whitelist
Feb 11 12:53:17.377: INFO: namespace e2e-tests-daemonsets-kp6hv deletion completed in 6.256105419s

• [SLOW TEST:44.150 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
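The Daemon set spec above creates a DaemonSet named `daemon-set` with a node selector, re-labels the node from blue to green to force the daemon pod off and then back on, and switches the update strategy to RollingUpdate along the way. A sketch of a DaemonSet with that shape; the label key/values and image are assumptions, and the strategy is shown already set to the value the spec switches to:

```go
// Sketch of a node-selector-driven DaemonSet of the kind the spec exercises.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"daemonset-name": "daemon-set"}},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"daemonset-name": "daemon-set"}},
				Spec: v1.PodSpec{
					// Daemon pods only schedule onto nodes carrying this label;
					// the spec flips the node label between blue and green to
					// move the pod off and back onto the node.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []v1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```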
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:53:17.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 12:53:17.578: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:53:18.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-25tkq" for this suite.
Feb 11 12:53:24.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:53:25.065: INFO: namespace: e2e-tests-custom-resource-definition-25tkq, resource: bindings, ignored listing per whitelist
Feb 11 12:53:25.111: INFO: namespace e2e-tests-custom-resource-definition-25tkq deletion completed in 6.240131509s

• [SLOW TEST:7.734 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
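The CustomResourceDefinition spec above simply registers and deletes a CRD through the 1.13-era apiextensions/v1beta1 API. A sketch of such an object, with an invented group and kind; building it requires the k8s.io/apiextensions-apiserver module:

```go
// Sketch of a v1beta1 CustomResourceDefinition of the sort the spec creates
// and deletes; the group, kind, and plural are made up.
package main

import (
	"encoding/json"
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextv1beta1.CustomResourceDefinition{
		// The CRD name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```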
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:53:25.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-7de4cccb-4ccd-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:53:25.333: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-9jnhc" to be "success or failure"
Feb 11 12:53:25.382: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.581971ms
Feb 11 12:53:27.552: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219471249s
Feb 11 12:53:29.590: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256912149s
Feb 11 12:53:32.828: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.494628036s
Feb 11 12:53:34.844: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.511426031s
Feb 11 12:53:36.868: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.535280446s
Feb 11 12:53:38.902: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.569067034s
STEP: Saw pod success
Feb 11 12:53:38.902: INFO: Pod "pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:53:38.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 12:53:39.224: INFO: Waiting for pod pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:53:39.252: INFO: Pod pod-projected-secrets-7de63d12-4ccd-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:53:39.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9jnhc" for this suite.
Feb 11 12:53:47.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:53:47.392: INFO: namespace: e2e-tests-projected-9jnhc, resource: bindings, ignored listing per whitelist
Feb 11 12:53:47.443: INFO: namespace e2e-tests-projected-9jnhc deletion completed in 8.181239721s

• [SLOW TEST:22.331 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
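The projected-secret test creates a secret, mounts it through a projected volume with a key-to-path mapping and an explicit item mode, and has a short-lived container read the file back (the "success or failure" condition above). A rough stand-alone equivalent, with made-up names, key, path and a 0400 mode rather than whatever the suite generated:

kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # the mapping: the key appears under a chosen relative path
            mode: 0400              # the "Item Mode" in the test name
EOF

kubectl logs pod-projected-secrets-demo     # once the pod has Succeeded, shows the mapped file's value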
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:53:47.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 11 12:53:47.631: INFO: Number of nodes with available pods: 0
Feb 11 12:53:47.631: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:48.660: INFO: Number of nodes with available pods: 0
Feb 11 12:53:48.661: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:51.344: INFO: Number of nodes with available pods: 0
Feb 11 12:53:51.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:53.370: INFO: Number of nodes with available pods: 0
Feb 11 12:53:53.371: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:53.801: INFO: Number of nodes with available pods: 0
Feb 11 12:53:53.801: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:54.718: INFO: Number of nodes with available pods: 0
Feb 11 12:53:54.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:55.653: INFO: Number of nodes with available pods: 0
Feb 11 12:53:55.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:56.751: INFO: Number of nodes with available pods: 0
Feb 11 12:53:56.751: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:58.259: INFO: Number of nodes with available pods: 0
Feb 11 12:53:58.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:58.691: INFO: Number of nodes with available pods: 0
Feb 11 12:53:58.691: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:53:59.658: INFO: Number of nodes with available pods: 0
Feb 11 12:53:59.658: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:00.683: INFO: Number of nodes with available pods: 1
Feb 11 12:54:00.683: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 11 12:54:00.851: INFO: Number of nodes with available pods: 0
Feb 11 12:54:00.851: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:02.505: INFO: Number of nodes with available pods: 0
Feb 11 12:54:02.505: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:02.869: INFO: Number of nodes with available pods: 0
Feb 11 12:54:02.869: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:04.715: INFO: Number of nodes with available pods: 0
Feb 11 12:54:04.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:04.940: INFO: Number of nodes with available pods: 0
Feb 11 12:54:04.940: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:05.886: INFO: Number of nodes with available pods: 0
Feb 11 12:54:05.886: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:06.870: INFO: Number of nodes with available pods: 0
Feb 11 12:54:06.871: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:07.897: INFO: Number of nodes with available pods: 0
Feb 11 12:54:07.897: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:09.687: INFO: Number of nodes with available pods: 0
Feb 11 12:54:09.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:10.074: INFO: Number of nodes with available pods: 0
Feb 11 12:54:10.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:10.883: INFO: Number of nodes with available pods: 0
Feb 11 12:54:10.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:13.668: INFO: Number of nodes with available pods: 0
Feb 11 12:54:13.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:14.002: INFO: Number of nodes with available pods: 0
Feb 11 12:54:14.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:15.011: INFO: Number of nodes with available pods: 0
Feb 11 12:54:15.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:15.875: INFO: Number of nodes with available pods: 0
Feb 11 12:54:15.875: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 11 12:54:16.876: INFO: Number of nodes with available pods: 1
Feb 11 12:54:16.876: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-59g5k, will wait for the garbage collector to delete the pods
Feb 11 12:54:16.961: INFO: Deleting DaemonSet.extensions daemon-set took: 23.042025ms
Feb 11 12:54:17.162: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.563166ms
Feb 11 12:54:28.974: INFO: Number of nodes with available pods: 0
Feb 11 12:54:28.974: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 12:54:28.978: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-59g5k/daemonsets","resourceVersion":"21314208"},"items":null}

Feb 11 12:54:28.982: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-59g5k/pods","resourceVersion":"21314208"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:54:28.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-59g5k" for this suite.
Feb 11 12:54:35.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:54:35.118: INFO: namespace: e2e-tests-daemonsets-59g5k, resource: bindings, ignored listing per whitelist
Feb 11 12:54:35.182: INFO: namespace e2e-tests-daemonsets-59g5k deletion completed in 6.181899247s

• [SLOW TEST:47.739 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
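The DaemonSet test creates a one-container DaemonSet, waits until every schedulable node (a single node in this cluster) has an available pod, force-fails one pod through the API and expects the controller to replace it. The failure injection needs a status update, but the create-and-observe part looks roughly like this, with an illustrative image and label:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF

kubectl rollout status ds/daemon-set         # blocks until each node runs an available daemon pod
kubectl get pods -l app=daemon-set -o wide   # one pod per schedulable node
kubectl delete ds daemon-set                 # cleanup; the garbage collector deletes the pods, as in the log above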
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:54:35.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a7cdf9c2-4ccd-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 12:54:35.775: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-8pkkh" to be "success or failure"
Feb 11 12:54:35.901: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 126.004946ms
Feb 11 12:54:38.173: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398289975s
Feb 11 12:54:40.209: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433891189s
Feb 11 12:54:42.988: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.213142474s
Feb 11 12:54:45.201: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.426023874s
Feb 11 12:54:49.078: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.303349953s
Feb 11 12:54:52.635: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.860445472s
Feb 11 12:54:57.033: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.257835278s
STEP: Saw pod success
Feb 11 12:54:57.033: INFO: Pod "pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:54:57.062: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 12:54:58.150: INFO: Waiting for pod pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:54:58.166: INFO: Pod pod-projected-secrets-a7e792a4-4ccd-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:54:58.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8pkkh" for this suite.
Feb 11 12:55:04.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:55:04.522: INFO: namespace: e2e-tests-projected-8pkkh, resource: bindings, ignored listing per whitelist
Feb 11 12:55:04.529: INFO: namespace e2e-tests-projected-8pkkh deletion completed in 6.35490084s

• [SLOW TEST:29.347 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:55:04.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:55:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-bsvsl" for this suite.
Feb 11 12:56:15.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:56:15.929: INFO: namespace: e2e-tests-kubelet-test-bsvsl, resource: bindings, ignored listing per whitelist
Feb 11 12:56:16.006: INFO: namespace e2e-tests-kubelet-test-bsvsl deletion completed in 54.428178305s

• [SLOW TEST:71.476 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
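The kubelet test above runs a busybox command in a pod and checks that its stdout is retrievable through the logs API; the log shows only the setup and teardown. An equivalent by hand, with an arbitrary pod name and message:

kubectl run busybox-logs --image=busybox --restart=Never -- sh -c 'echo hello from busybox'
sleep 10                        # allow for the image pull and the short-lived container to exit
kubectl logs busybox-logs       # expected output: hello from busybox
kubectl delete pod busybox-logs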
SS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:56:16.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-2tm9f
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-2tm9f
STEP: Deleting pre-stop pod
Feb 11 12:56:45.483: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:56:45.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-2tm9f" for this suite.
Feb 11 12:57:25.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:57:25.865: INFO: namespace: e2e-tests-prestop-2tm9f, resource: bindings, ignored listing per whitelist
Feb 11 12:57:25.913: INFO: namespace e2e-tests-prestop-2tm9f deletion completed in 40.322703219s

• [SLOW TEST:69.906 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
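The PreStop test starts a server pod that counts hook deliveries and a tester pod whose preStop handler calls back to it; the "Saw:" JSON above is the server's report ("prestop": 1). The mechanism being exercised is just a pod lifecycle preStop handler, sketched here with a sleep standing in for the test's HTTP callback:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # stand-in for the conformance test's callback to the server pod
EOF

kubectl delete pod prestop-demo   # deletion takes roughly 5s longer: the kubelet runs the hook before stopping the container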
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:57:25.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 11 12:57:26.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-c8rht" to be "success or failure"
Feb 11 12:57:26.193: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.735903ms
Feb 11 12:57:28.714: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537054324s
Feb 11 12:57:30.729: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551749913s
Feb 11 12:57:33.385: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.208021854s
Feb 11 12:57:35.406: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.229084929s
Feb 11 12:57:37.751: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.573822929s
Feb 11 12:57:40.197: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.020126848s
Feb 11 12:57:42.215: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.038469143s
STEP: Saw pod success
Feb 11 12:57:42.216: INFO: Pod "downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:57:42.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005 container client-container: 
STEP: delete the pod
Feb 11 12:57:42.928: INFO: Waiting for pod downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:57:43.240: INFO: Pod downwardapi-volume-0d7f77e1-4cce-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:57:43.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c8rht" for this suite.
Feb 11 12:57:53.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:57:53.358: INFO: namespace: e2e-tests-projected-c8rht, resource: bindings, ignored listing per whitelist
Feb 11 12:57:53.499: INFO: namespace e2e-tests-projected-c8rht deletion completed in 10.227972543s

• [SLOW TEST:27.586 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
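The projected downwardAPI test mounts the container's own CPU limit as a file and has the container print it. A minimal stand-alone version; the 500m limit, the 1m divisor and the mount path are illustrative, not the suite's values:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m           # express the limit in millicores
EOF

kubectl logs downwardapi-cpu-limit   # prints 500 for the limit above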
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:57:53.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-1debb217-4cce-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 12:57:53.951: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-8ssj4" to be "success or failure"
Feb 11 12:57:54.024: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.215156ms
Feb 11 12:57:56.428: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476910154s
Feb 11 12:57:58.441: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489744386s
Feb 11 12:58:00.576: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.624530964s
Feb 11 12:58:02.608: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.657184922s
Feb 11 12:58:04.695: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.743875272s
Feb 11 12:58:06.720: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.768533997s
STEP: Saw pod success
Feb 11 12:58:06.720: INFO: Pod "pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:58:06.729: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 11 12:58:06.831: INFO: Waiting for pod pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:58:06.851: INFO: Pod pod-configmaps-1e00b9db-4cce-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:58:06.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8ssj4" for this suite.
Feb 11 12:58:12.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:58:12.993: INFO: namespace: e2e-tests-configmap-8ssj4, resource: bindings, ignored listing per whitelist
Feb 11 12:58:13.075: INFO: namespace e2e-tests-configmap-8ssj4 deletion completed in 6.206588977s

• [SLOW TEST:19.576 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
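The ConfigMap "mappings" test is the ConfigMap analogue of the projected-secret case: a key is projected to a chosen relative path inside the volume. A sketch with made-up names:

kubectl create configmap demo-config --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-vol
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data-1   # the mapping: key data-1 shows up under this relative path
EOF

kubectl logs pod-configmaps-demo   # prints value-1 once the pod has Succeeded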
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:58:13.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-29946888-4cce-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 12:58:13.361: INFO: Waiting up to 5m0s for pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-rx6x2" to be "success or failure"
Feb 11 12:58:13.374: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.973626ms
Feb 11 12:58:15.400: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038525316s
Feb 11 12:58:17.423: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061782926s
Feb 11 12:58:19.437: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076333501s
Feb 11 12:58:21.451: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089479887s
Feb 11 12:58:23.532: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17082009s
Feb 11 12:58:25.549: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.188109242s
STEP: Saw pod success
Feb 11 12:58:25.549: INFO: Pod "pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:58:25.557: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 11 12:58:27.242: INFO: Waiting for pod pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:58:27.256: INFO: Pod pod-configmaps-29961c0a-4cce-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:58:27.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rx6x2" for this suite.
Feb 11 12:58:33.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:58:33.540: INFO: namespace: e2e-tests-configmap-rx6x2, resource: bindings, ignored listing per whitelist
Feb 11 12:58:33.836: INFO: namespace e2e-tests-configmap-rx6x2 deletion completed in 6.561008322s

• [SLOW TEST:20.760 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
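The non-root variant consumes the same kind of ConfigMap volume with the pod forced to run as an unprivileged user; the default 0644 file mode keeps the projected files readable. The UID the suite uses is not visible in the log, so 1000 below is only illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # illustrative non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-vol
    configMap:
      name: demo-config          # reuses the ConfigMap from the previous sketch
EOF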
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:58:33.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-360376ac-4cce-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 12:58:34.285: INFO: Waiting up to 5m0s for pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005" in namespace "e2e-tests-configmap-k5nm4" to be "success or failure"
Feb 11 12:58:34.471: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 185.379145ms
Feb 11 12:58:36.499: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213777139s
Feb 11 12:58:38.595: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310015212s
Feb 11 12:58:40.616: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330945493s
Feb 11 12:58:42.782: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.496361382s
Feb 11 12:58:44.816: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.530599838s
Feb 11 12:58:46.836: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.551123505s
STEP: Saw pod success
Feb 11 12:58:46.837: INFO: Pod "pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:58:46.843: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 11 12:58:46.944: INFO: Waiting for pod pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:58:46.964: INFO: Pod pod-configmaps-3605e5d4-4cce-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:58:46.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-k5nm4" for this suite.
Feb 11 12:58:55.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:58:55.326: INFO: namespace: e2e-tests-configmap-k5nm4, resource: bindings, ignored listing per whitelist
Feb 11 12:58:55.409: INFO: namespace e2e-tests-configmap-k5nm4 deletion completed in 8.437406036s

• [SLOW TEST:21.571 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:58:55.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-42d3890d-4cce-11ea-a6e3-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-42d388e2-4cce-11ea-a6e3-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 11 12:58:55.678: INFO: Waiting up to 5m0s for pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-xctgn" to be "success or failure"
Feb 11 12:58:55.852: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 174.305395ms
Feb 11 12:58:57.874: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196082958s
Feb 11 12:58:59.909: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231637558s
Feb 11 12:59:01.931: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253267307s
Feb 11 12:59:03.946: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268677927s
Feb 11 12:59:05.961: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.283492832s
Feb 11 12:59:07.978: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.30018099s
STEP: Saw pod success
Feb 11 12:59:07.978: INFO: Pod "projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 12:59:07.987: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Feb 11 12:59:08.401: INFO: Waiting for pod projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005 to disappear
Feb 11 12:59:08.622: INFO: Pod projected-volume-42d387c7-4cce-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:59:08.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xctgn" for this suite.
Feb 11 12:59:16.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:59:16.777: INFO: namespace: e2e-tests-projected-xctgn, resource: bindings, ignored listing per whitelist
Feb 11 12:59:16.860: INFO: namespace e2e-tests-projected-xctgn deletion completed in 8.220475304s

• [SLOW TEST:21.450 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
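The "Projected combined" test is the one case where a single projected volume mixes a ConfigMap source, a Secret source and a downward API source. A sketch of such a volume; it reuses the illustrative demo-config and demo-secret objects from the earlier sketches, and the labels item is arbitrary:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
  labels:
    app: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls -R /etc/all-in-one && cat /etc/all-in-one/labels"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config
      - secret:
          name: demo-secret
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF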
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 12:59:16.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0211 12:59:57.201346       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 12:59:57.201: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 12:59:57.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lmdt4" for this suite.
Feb 11 13:00:25.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:00:25.324: INFO: namespace: e2e-tests-gc-lmdt4, resource: bindings, ignored listing per whitelist
Feb 11 13:00:25.583: INFO: namespace e2e-tests-gc-lmdt4 deletion completed in 28.373587293s

• [SLOW TEST:68.722 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
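The garbage-collector test deletes a replication controller with an orphaning delete option and then watches for 30 seconds to confirm the pods are not collected. With kubectl the same behavior is selected through the cascade policy; the rc name and label below are illustrative:

kubectl delete rc my-rc --cascade=orphan   # kubectl >= 1.20; clients as old as the v1.13 one here use --cascade=false
kubectl get pods -l app=my-rc              # the pods survive the delete; only their ownerReferences are cleared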
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:00:25.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 11 13:00:25.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dh5sg'
Feb 11 13:00:28.660: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 11 13:00:28.660: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 11 13:00:28.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-dh5sg'
Feb 11 13:00:29.018: INFO: stderr: ""
Feb 11 13:00:29.019: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:00:29.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dh5sg" for this suite.
Feb 11 13:00:37.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:00:37.363: INFO: namespace: e2e-tests-kubectl-dh5sg, resource: bindings, ignored listing per whitelist
Feb 11 13:00:37.389: INFO: namespace e2e-tests-kubectl-dh5sg deletion completed in 8.359429576s

• [SLOW TEST:11.806 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
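As the stderr above says, the --generator=job/v1 form of kubectl run is deprecated. What it created is, roughly, a plain batch/v1 Job whose pod template has restartPolicy: OnFailure:

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure        # a failed container is restarted in place rather than the pod being recreated
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
EOF

kubectl get jobs e2e-test-nginx-job
kubectl delete jobs e2e-test-nginx-job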
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:00:37.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:00:50.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-694dz" for this suite.
Feb 11 13:00:58.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:00:58.432: INFO: namespace: e2e-tests-kubelet-test-694dz, resource: bindings, ignored listing per whitelist
Feb 11 13:00:58.593: INFO: namespace e2e-tests-kubelet-test-694dz deletion completed in 8.34791154s

• [SLOW TEST:21.203 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
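The "terminated reason" test schedules a command that always fails and then inspects the container's terminated state. By hand, with an arbitrary pod name; a non-zero exit with no restarts should surface the reason Error:

kubectl run always-fails --image=busybox --restart=Never -- /bin/false
sleep 10    # allow for the image pull and the immediate exit
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}{"\n"}'
# expected output: Error
kubectl delete pod always-fails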
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:00:58.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 11 13:00:58.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:00:59.340: INFO: stderr: ""
Feb 11 13:00:59.341: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 13:00:59.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:00:59.550: INFO: stderr: ""
Feb 11 13:00:59.551: INFO: stdout: "update-demo-nautilus-75v4r update-demo-nautilus-77tcx "
Feb 11 13:00:59.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75v4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:00:59.753: INFO: stderr: ""
Feb 11 13:00:59.754: INFO: stdout: ""
Feb 11 13:00:59.754: INFO: update-demo-nautilus-75v4r is created but not running
Feb 11 13:01:04.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:04.944: INFO: stderr: ""
Feb 11 13:01:04.944: INFO: stdout: "update-demo-nautilus-75v4r update-demo-nautilus-77tcx "
Feb 11 13:01:04.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75v4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:05.097: INFO: stderr: ""
Feb 11 13:01:05.097: INFO: stdout: ""
Feb 11 13:01:05.097: INFO: update-demo-nautilus-75v4r is created but not running
Feb 11 13:01:10.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:10.245: INFO: stderr: ""
Feb 11 13:01:10.245: INFO: stdout: "update-demo-nautilus-75v4r update-demo-nautilus-77tcx "
Feb 11 13:01:10.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75v4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:10.377: INFO: stderr: ""
Feb 11 13:01:10.377: INFO: stdout: ""
Feb 11 13:01:10.377: INFO: update-demo-nautilus-75v4r is created but not running
Feb 11 13:01:15.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:15.552: INFO: stderr: ""
Feb 11 13:01:15.552: INFO: stdout: "update-demo-nautilus-75v4r update-demo-nautilus-77tcx "
Feb 11 13:01:15.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75v4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:15.712: INFO: stderr: ""
Feb 11 13:01:15.713: INFO: stdout: "true"
Feb 11 13:01:15.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75v4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:15.818: INFO: stderr: ""
Feb 11 13:01:15.819: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:01:15.819: INFO: validating pod update-demo-nautilus-75v4r
Feb 11 13:01:15.962: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:01:15.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:01:15.962: INFO: update-demo-nautilus-75v4r is verified up and running
Feb 11 13:01:15.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:16.155: INFO: stderr: ""
Feb 11 13:01:16.155: INFO: stdout: "true"
Feb 11 13:01:16.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:16.272: INFO: stderr: ""
Feb 11 13:01:16.272: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:01:16.272: INFO: validating pod update-demo-nautilus-77tcx
Feb 11 13:01:16.285: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:01:16.285: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:01:16.285: INFO: update-demo-nautilus-77tcx is verified up and running
STEP: scaling down the replication controller
Feb 11 13:01:16.288: INFO: scanned /root for discovery docs: 
Feb 11 13:01:16.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:18.630: INFO: stderr: ""
Feb 11 13:01:18.630: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 13:01:18.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:19.501: INFO: stderr: ""
Feb 11 13:01:19.501: INFO: stdout: "update-demo-nautilus-75v4r update-demo-nautilus-77tcx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 11 13:01:24.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:24.683: INFO: stderr: ""
Feb 11 13:01:24.684: INFO: stdout: "update-demo-nautilus-75v4r update-demo-nautilus-77tcx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 11 13:01:29.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:29.892: INFO: stderr: ""
Feb 11 13:01:29.892: INFO: stdout: "update-demo-nautilus-75v4r update-demo-nautilus-77tcx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 11 13:01:34.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:35.145: INFO: stderr: ""
Feb 11 13:01:35.145: INFO: stdout: "update-demo-nautilus-77tcx "
Feb 11 13:01:35.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:35.235: INFO: stderr: ""
Feb 11 13:01:35.235: INFO: stdout: "true"
Feb 11 13:01:35.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:35.323: INFO: stderr: ""
Feb 11 13:01:35.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:01:35.323: INFO: validating pod update-demo-nautilus-77tcx
Feb 11 13:01:35.334: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:01:35.334: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:01:35.334: INFO: update-demo-nautilus-77tcx is verified up and running
STEP: scaling up the replication controller
Feb 11 13:01:35.336: INFO: scanned /root for discovery docs: 
Feb 11 13:01:35.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:36.712: INFO: stderr: ""
Feb 11 13:01:36.713: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 13:01:36.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:36.864: INFO: stderr: ""
Feb 11 13:01:36.864: INFO: stdout: "update-demo-nautilus-77tcx update-demo-nautilus-9j62g "
Feb 11 13:01:36.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:37.067: INFO: stderr: ""
Feb 11 13:01:37.067: INFO: stdout: "true"
Feb 11 13:01:37.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:37.391: INFO: stderr: ""
Feb 11 13:01:37.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:01:37.391: INFO: validating pod update-demo-nautilus-77tcx
Feb 11 13:01:37.405: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:01:37.405: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:01:37.405: INFO: update-demo-nautilus-77tcx is verified up and running
Feb 11 13:01:37.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j62g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:37.609: INFO: stderr: ""
Feb 11 13:01:37.610: INFO: stdout: ""
Feb 11 13:01:37.610: INFO: update-demo-nautilus-9j62g is created but not running
Feb 11 13:01:42.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:42.947: INFO: stderr: ""
Feb 11 13:01:42.947: INFO: stdout: "update-demo-nautilus-77tcx update-demo-nautilus-9j62g "
Feb 11 13:01:42.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:43.293: INFO: stderr: ""
Feb 11 13:01:43.293: INFO: stdout: "true"
Feb 11 13:01:43.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:43.490: INFO: stderr: ""
Feb 11 13:01:43.491: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:01:43.491: INFO: validating pod update-demo-nautilus-77tcx
Feb 11 13:01:43.508: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:01:43.508: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:01:43.508: INFO: update-demo-nautilus-77tcx is verified up and running
Feb 11 13:01:43.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j62g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:43.684: INFO: stderr: ""
Feb 11 13:01:43.684: INFO: stdout: ""
Feb 11 13:01:43.684: INFO: update-demo-nautilus-9j62g is created but not running
Feb 11 13:01:48.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:48.840: INFO: stderr: ""
Feb 11 13:01:48.840: INFO: stdout: "update-demo-nautilus-77tcx update-demo-nautilus-9j62g "
Feb 11 13:01:48.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:49.023: INFO: stderr: ""
Feb 11 13:01:49.023: INFO: stdout: "true"
Feb 11 13:01:49.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77tcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:49.226: INFO: stderr: ""
Feb 11 13:01:49.226: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:01:49.226: INFO: validating pod update-demo-nautilus-77tcx
Feb 11 13:01:49.249: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:01:49.249: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:01:49.249: INFO: update-demo-nautilus-77tcx is verified up and running
Feb 11 13:01:49.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j62g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:49.368: INFO: stderr: ""
Feb 11 13:01:49.368: INFO: stdout: "true"
Feb 11 13:01:49.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j62g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:49.466: INFO: stderr: ""
Feb 11 13:01:49.466: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:01:49.466: INFO: validating pod update-demo-nautilus-9j62g
Feb 11 13:01:49.476: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:01:49.476: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:01:49.476: INFO: update-demo-nautilus-9j62g is verified up and running
STEP: using delete to clean up resources
Feb 11 13:01:49.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:49.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:01:49.654: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 11 13:01:49.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6mmwv'
Feb 11 13:01:49.983: INFO: stderr: "No resources found.\n"
Feb 11 13:01:49.984: INFO: stdout: ""
Feb 11 13:01:49.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6mmwv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 13:01:50.206: INFO: stderr: ""
Feb 11 13:01:50.207: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:01:50.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6mmwv" for this suite.
Feb 11 13:02:15.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:02:16.013: INFO: namespace: e2e-tests-kubectl-6mmwv, resource: bindings, ignored listing per whitelist
Feb 11 13:02:16.112: INFO: namespace e2e-tests-kubectl-6mmwv deletion completed in 25.412509035s

• [SLOW TEST:77.519 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
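
The scale test above reduces to a handful of kubectl invocations that the framework keeps re-running until the pod set stabilises. A condensed, single-pass shell sketch of that loop, using the resource names from the log (the retry logic lives in the suite's Go code and is not reproduced here):

# Names are copied from the log; the namespace is deleted when the suite finishes,
# so treat this purely as an illustration of the commands involved.
NS=e2e-tests-kubectl-6mmwv
kubectl --namespace="$NS" scale rc update-demo-nautilus --replicas=2 --timeout=5m

# list the pods selected by name=update-demo ...
kubectl --namespace="$NS" get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# ... and, per pod, ask whether the update-demo container is in state "running"
kubectl --namespace="$NS" get pods update-demo-nautilus-9j62g \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
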
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:02:16.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7vtrn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 13:02:16.380: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 11 13:02:54.659: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7vtrn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:02:54.659: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:02:54.753856       9 log.go:172] (0xc00092fa20) (0xc0001a30e0) Create stream
I0211 13:02:54.754110       9 log.go:172] (0xc00092fa20) (0xc0001a30e0) Stream added, broadcasting: 1
I0211 13:02:54.760874       9 log.go:172] (0xc00092fa20) Reply frame received for 1
I0211 13:02:54.760915       9 log.go:172] (0xc00092fa20) (0xc000110e60) Create stream
I0211 13:02:54.760923       9 log.go:172] (0xc00092fa20) (0xc000110e60) Stream added, broadcasting: 3
I0211 13:02:54.762164       9 log.go:172] (0xc00092fa20) Reply frame received for 3
I0211 13:02:54.762183       9 log.go:172] (0xc00092fa20) (0xc000111ae0) Create stream
I0211 13:02:54.762193       9 log.go:172] (0xc00092fa20) (0xc000111ae0) Stream added, broadcasting: 5
I0211 13:02:54.763605       9 log.go:172] (0xc00092fa20) Reply frame received for 5
I0211 13:02:54.906195       9 log.go:172] (0xc00092fa20) Data frame received for 3
I0211 13:02:54.906406       9 log.go:172] (0xc000110e60) (3) Data frame handling
I0211 13:02:54.906432       9 log.go:172] (0xc000110e60) (3) Data frame sent
I0211 13:02:55.108776       9 log.go:172] (0xc00092fa20) (0xc000110e60) Stream removed, broadcasting: 3
I0211 13:02:55.109018       9 log.go:172] (0xc00092fa20) (0xc000111ae0) Stream removed, broadcasting: 5
I0211 13:02:55.109073       9 log.go:172] (0xc00092fa20) Data frame received for 1
I0211 13:02:55.109116       9 log.go:172] (0xc0001a30e0) (1) Data frame handling
I0211 13:02:55.109166       9 log.go:172] (0xc0001a30e0) (1) Data frame sent
I0211 13:02:55.109180       9 log.go:172] (0xc00092fa20) (0xc0001a30e0) Stream removed, broadcasting: 1
I0211 13:02:55.109198       9 log.go:172] (0xc00092fa20) Go away received
I0211 13:02:55.109675       9 log.go:172] (0xc00092fa20) (0xc0001a30e0) Stream removed, broadcasting: 1
I0211 13:02:55.109691       9 log.go:172] (0xc00092fa20) (0xc000110e60) Stream removed, broadcasting: 3
I0211 13:02:55.109696       9 log.go:172] (0xc00092fa20) (0xc000111ae0) Stream removed, broadcasting: 5
Feb 11 13:02:55.109: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:02:55.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7vtrn" for this suite.
Feb 11 13:03:21.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:03:21.352: INFO: namespace: e2e-tests-pod-network-test-7vtrn, resource: bindings, ignored listing per whitelist
Feb 11 13:03:21.425: INFO: namespace e2e-tests-pod-network-test-7vtrn deletion completed in 26.278577732s

• [SLOW TEST:65.312 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
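
The dense I0211 stream output above is just the SPDY plumbing behind a single remote exec. The actual check is the curl shown in the ExecWithOptions line at 13:02:54; run by hand it would look like the sketch below (pod IP, pod and container names are taken from the log and will differ on a live cluster):

kubectl exec -n e2e-tests-pod-network-test-7vtrn host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'"
# success criterion: the response names the expected endpoint (netserver-0 above)
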
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:03:21.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 11 13:03:21.667: INFO: Waiting up to 5m0s for pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-vvwst" to be "success or failure"
Feb 11 13:03:21.680: INFO: Pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.405438ms
Feb 11 13:03:23.704: INFO: Pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035900629s
Feb 11 13:03:25.723: INFO: Pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055442125s
Feb 11 13:03:28.406: INFO: Pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.737904674s
Feb 11 13:03:30.431: INFO: Pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762949519s
Feb 11 13:03:32.446: INFO: Pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.778026082s
STEP: Saw pod success
Feb 11 13:03:32.446: INFO: Pod "pod-e160391e-4cce-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 13:03:32.454: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e160391e-4cce-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 13:03:32.609: INFO: Waiting for pod pod-e160391e-4cce-11ea-a6e3-0242ac110005 to disappear
Feb 11 13:03:33.574: INFO: Pod pod-e160391e-4cce-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:03:33.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vvwst" for this suite.
Feb 11 13:03:39.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:03:40.018: INFO: namespace: e2e-tests-emptydir-vvwst, resource: bindings, ignored listing per whitelist
Feb 11 13:03:40.075: INFO: namespace e2e-tests-emptydir-vvwst deletion completed in 6.27409161s

• [SLOW TEST:18.650 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
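
The pod behind the "Saw pod success" transition above mounts a tmpfs-backed emptyDir and reports its mode. A minimal hand-written approximation (image, command and paths are placeholders rather than whatever the suite actually deploys); the later default-medium variant of this test differs only in that the emptyDir stanza is left empty:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # placeholder image
    command: ["sh", "-c", "stat -c '%a' /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs; drop this line for the default medium
EOF
kubectl logs emptydir-tmpfs-demo   # prints the mount's mode and its tmpfs filesystem type
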
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:03:40.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 11 13:03:52.700: INFO: Waiting up to 5m0s for pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005" in namespace "e2e-tests-pods-qslkk" to be "success or failure"
Feb 11 13:03:52.705: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.284387ms
Feb 11 13:03:55.554: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.853832287s
Feb 11 13:03:57.673: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.97255711s
Feb 11 13:03:59.684: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.984295145s
Feb 11 13:04:02.942: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.241693348s
Feb 11 13:04:04.956: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.255760304s
Feb 11 13:04:06.978: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.277685144s
Feb 11 13:04:08.996: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.296043204s
STEP: Saw pod success
Feb 11 13:04:08.996: INFO: Pod "client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 13:04:09.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005 container env3cont: 
STEP: delete the pod
Feb 11 13:04:09.088: INFO: Waiting for pod client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005 to disappear
Feb 11 13:04:09.102: INFO: Pod client-envvars-f3d004b4-4cce-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:04:09.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qslkk" for this suite.
Feb 11 13:04:53.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:04:53.311: INFO: namespace: e2e-tests-pods-qslkk, resource: bindings, ignored listing per whitelist
Feb 11 13:04:53.407: INFO: namespace e2e-tests-pods-qslkk deletion completed in 44.286985862s

• [SLOW TEST:73.332 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
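
What this spec asserts is the classic service-to-environment injection: a pod created after a service exists sees <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables for it. A rough stand-alone sketch (the service and pod names are invented, not the ones the suite used):

# create a service first, then a pod, then read the injected variables back
kubectl create service clusterip fooservice --tcp=8765:8080
kubectl run envcheck --image=busybox --restart=Never --command -- sh -c 'env | grep FOOSERVICE_'
kubectl logs envcheck   # expect FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT among the output
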
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:04:53.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 11 13:05:17.897: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:17.926: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:19.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:19.955: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:21.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:21.973: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:23.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:23.942: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:25.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:25.941: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:27.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:27.946: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:29.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:29.956: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:31.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:31.945: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 13:05:33.927: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 13:05:33.947: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:05:33.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-fd5tx" for this suite.
Feb 11 13:05:58.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:05:58.165: INFO: namespace: e2e-tests-container-lifecycle-hook-fd5tx, resource: bindings, ignored listing per whitelist
Feb 11 13:05:58.194: INFO: namespace e2e-tests-container-lifecycle-hook-fd5tx deletion completed in 24.20177512s

• [SLOW TEST:64.786 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
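
The pod-with-prestop-http-hook pod that the loop above waits on carries a lifecycle.preStop httpGet handler, so deleting it triggers an HTTP GET against the handler pod before the container is killed. An illustrative spec (the handler address, path and images are assumptions, not the suite's):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx                    # placeholder workload image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # placeholder path on the hook-handler pod
          port: 8080
          host: 10.32.0.5           # placeholder: IP of the pod that records the hook
EOF
kubectl delete pod pod-with-prestop-http-hook   # deletion fires the preStop GET first
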
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:05:58.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 11 13:05:58.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:05:59.457: INFO: stderr: ""
Feb 11 13:05:59.458: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 13:05:59.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:05:59.787: INFO: stderr: ""
Feb 11 13:05:59.788: INFO: stdout: "update-demo-nautilus-98hmw update-demo-nautilus-vpt8m "
Feb 11 13:05:59.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98hmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:05:59.936: INFO: stderr: ""
Feb 11 13:05:59.936: INFO: stdout: ""
Feb 11 13:05:59.936: INFO: update-demo-nautilus-98hmw is created but not running
Feb 11 13:06:04.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:05.137: INFO: stderr: ""
Feb 11 13:06:05.138: INFO: stdout: "update-demo-nautilus-98hmw update-demo-nautilus-vpt8m "
Feb 11 13:06:05.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98hmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:05.248: INFO: stderr: ""
Feb 11 13:06:05.248: INFO: stdout: ""
Feb 11 13:06:05.248: INFO: update-demo-nautilus-98hmw is created but not running
Feb 11 13:06:10.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:10.621: INFO: stderr: ""
Feb 11 13:06:10.621: INFO: stdout: "update-demo-nautilus-98hmw update-demo-nautilus-vpt8m "
Feb 11 13:06:10.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98hmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:10.756: INFO: stderr: ""
Feb 11 13:06:10.756: INFO: stdout: "true"
Feb 11 13:06:10.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98hmw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:10.907: INFO: stderr: ""
Feb 11 13:06:10.907: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:06:10.907: INFO: validating pod update-demo-nautilus-98hmw
Feb 11 13:06:10.997: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:06:10.997: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:06:10.997: INFO: update-demo-nautilus-98hmw is verified up and running
Feb 11 13:06:10.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vpt8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:11.139: INFO: stderr: ""
Feb 11 13:06:11.139: INFO: stdout: "true"
Feb 11 13:06:11.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vpt8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:11.260: INFO: stderr: ""
Feb 11 13:06:11.260: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 13:06:11.260: INFO: validating pod update-demo-nautilus-vpt8m
Feb 11 13:06:11.280: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 13:06:11.280: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 13:06:11.280: INFO: update-demo-nautilus-vpt8m is verified up and running
STEP: using delete to clean up resources
Feb 11 13:06:11.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:11.415: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:06:11.416: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 11 13:06:11.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-9nx8s'
Feb 11 13:06:11.536: INFO: stderr: "No resources found.\n"
Feb 11 13:06:11.536: INFO: stdout: ""
Feb 11 13:06:11.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-9nx8s -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 13:06:11.667: INFO: stderr: ""
Feb 11 13:06:11.667: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:06:11.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9nx8s" for this suite.
Feb 11 13:06:37.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:06:37.908: INFO: namespace: e2e-tests-kubectl-9nx8s, resource: bindings, ignored listing per whitelist
Feb 11 13:06:37.928: INFO: namespace e2e-tests-kubectl-9nx8s deletion completed in 26.250911673s

• [SLOW TEST:39.733 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
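
Stripped of the polling, the create-and-stop spec is the following command skeleton; every invocation below appears verbatim in the log (the RC manifest itself is piped in on stdin by the framework and is not part of the output):

NS=e2e-tests-kubectl-9nx8s
kubectl --namespace="$NS" create -f -        # stdin: the update-demo-nautilus RC manifest
kubectl --namespace="$NS" get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl --namespace="$NS" delete --grace-period=0 --force -f -   # stdin: the same manifest
# confirm nothing survives except pods already marked for deletion
kubectl --namespace="$NS" get rc,svc -l name=update-demo --no-headers
kubectl --namespace="$NS" get pods -l name=update-demo \
  -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
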
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:06:37.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-567ffc17-4ccf-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 11 13:06:38.169: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-projected-mzk4w" to be "success or failure"
Feb 11 13:06:38.202: INFO: Pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.01556ms
Feb 11 13:06:40.235: INFO: Pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065879872s
Feb 11 13:06:42.291: INFO: Pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122488557s
Feb 11 13:06:45.181: INFO: Pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.011683991s
Feb 11 13:06:47.196: INFO: Pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.027380276s
Feb 11 13:06:49.221: INFO: Pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.051854946s
STEP: Saw pod success
Feb 11 13:06:49.221: INFO: Pod "pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 13:06:49.229: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 13:06:49.334: INFO: Waiting for pod pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005 to disappear
Feb 11 13:06:49.339: INFO: Pod pod-projected-configmaps-56811080-4ccf-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:06:49.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mzk4w" for this suite.
Feb 11 13:06:55.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:06:55.798: INFO: namespace: e2e-tests-projected-mzk4w, resource: bindings, ignored listing per whitelist
Feb 11 13:06:55.814: INFO: namespace e2e-tests-projected-mzk4w deletion completed in 6.465159356s

• [SLOW TEST:17.886 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
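
The projected-configmap pod above combines two things: a projected volume with a configMap source and a non-root securityContext. A minimal sketch of that shape (ConfigMap name, key, UID and image are illustrative, not the suite's generated names):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root UID; the suite's value may differ
  containers:
  - name: projected-configmap-volume-test
    image: busybox                 # placeholder image
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
kubectl logs projected-cm-nonroot   # expect "value-1"
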
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:06:55.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 11 13:06:56.672: INFO: Waiting up to 5m0s for pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-l26m8" to be "success or failure"
Feb 11 13:06:56.724: INFO: Pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.494529ms
Feb 11 13:06:59.033: INFO: Pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361147938s
Feb 11 13:07:01.065: INFO: Pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392853622s
Feb 11 13:07:03.376: INFO: Pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704294479s
Feb 11 13:07:05.397: INFO: Pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724745912s
Feb 11 13:07:07.422: INFO: Pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.750284071s
STEP: Saw pod success
Feb 11 13:07:07.423: INFO: Pod "pod-615428e0-4ccf-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 13:07:07.430: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-615428e0-4ccf-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 13:07:08.415: INFO: Waiting for pod pod-615428e0-4ccf-11ea-a6e3-0242ac110005 to disappear
Feb 11 13:07:08.670: INFO: Pod pod-615428e0-4ccf-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:07:08.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l26m8" for this suite.
Feb 11 13:07:16.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:07:16.827: INFO: namespace: e2e-tests-emptydir-l26m8, resource: bindings, ignored listing per whitelist
Feb 11 13:07:16.969: INFO: namespace e2e-tests-emptydir-l26m8 deletion completed in 8.282354896s

• [SLOW TEST:21.154 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:07:16.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bxfx4
Feb 11 13:07:29.267: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bxfx4
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 13:07:29.280: INFO: Initial restart count of pod liveness-http is 0
Feb 11 13:07:50.749: INFO: Restart count of pod e2e-tests-container-probe-bxfx4/liveness-http is now 1 (21.469223889s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:07:50.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bxfx4" for this suite.
Feb 11 13:07:59.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:07:59.108: INFO: namespace: e2e-tests-container-probe-bxfx4, resource: bindings, ignored listing per whitelist
Feb 11 13:07:59.200: INFO: namespace e2e-tests-container-probe-bxfx4 deletion completed in 8.264327687s

• [SLOW TEST:42.232 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
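
The liveness-http pod restarts once its probe starts failing, which is exactly the restartCount transition logged at 13:07:50. A sketch of a pod with such a probe, following the upstream documentation example rather than this suite's exact manifest (image, args and timings are assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness     # docs example image: serves /healthz, then starts failing
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# watch the restart count climb once /healthz begins returning errors
kubectl get pod liveness-http -o template \
  --template='{{(index .status.containerStatuses 0).restartCount}}'
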
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:07:59.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 11 13:07:59.508: INFO: Waiting up to 5m0s for pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-downward-api-4xtg5" to be "success or failure"
Feb 11 13:07:59.519: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.472708ms
Feb 11 13:08:01.689: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179982651s
Feb 11 13:08:03.788: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279747432s
Feb 11 13:08:07.017: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.508209351s
Feb 11 13:08:09.043: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.534003603s
Feb 11 13:08:11.055: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.546657369s
Feb 11 13:08:13.068: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.559198908s
STEP: Saw pod success
Feb 11 13:08:13.068: INFO: Pod "downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 13:08:13.078: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb 11 13:08:14.645: INFO: Waiting for pod downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005 to disappear
Feb 11 13:08:15.137: INFO: Pod downward-api-86f8589c-4ccf-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:08:15.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4xtg5" for this suite.
Feb 11 13:08:21.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:08:21.581: INFO: namespace: e2e-tests-downward-api-4xtg5, resource: bindings, ignored listing per whitelist
Feb 11 13:08:21.597: INFO: namespace e2e-tests-downward-api-4xtg5 deletion completed in 6.442661014s

• [SLOW TEST:22.396 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
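
The downward-api pod exposes its own metadata as environment variables; the UID case asserted above maps metadata.uid into an env var via fieldRef. A minimal sketch (names and image are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # placeholder image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
kubectl logs downward-uid-demo     # prints POD_UID=<the pod's UID>
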
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:08:21.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:08:21.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5bgrh" for this suite.
Feb 11 13:08:27.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:08:28.106: INFO: namespace: e2e-tests-kubelet-test-5bgrh, resource: bindings, ignored listing per whitelist
Feb 11 13:08:28.113: INFO: namespace e2e-tests-kubelet-test-5bgrh deletion completed in 6.228844122s

• [SLOW TEST:6.516 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
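
Almost all of this spec happens in its BeforeEach: a pod whose busybox command exits non-zero is created, and the It block only checks that such a pod can still be deleted cleanly. A rough hand-rolled equivalent (the suite's pod keeps re-running its failing command, which the one-shot sketch below does not reproduce):

kubectl run bin-false --image=busybox --restart=Never --command -- /bin/false
kubectl get pod bin-false          # ends up in a failed state
kubectl delete pod bin-false       # deletion must still succeed
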
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:08:28.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 11 13:08:28.429: INFO: Waiting up to 5m0s for pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-emptydir-lnqtj" to be "success or failure"
Feb 11 13:08:28.444: INFO: Pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.937602ms
Feb 11 13:08:30.465: INFO: Pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035913787s
Feb 11 13:08:32.502: INFO: Pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0726271s
Feb 11 13:08:34.541: INFO: Pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11191121s
Feb 11 13:08:36.819: INFO: Pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.390289381s
Feb 11 13:08:38.836: INFO: Pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.40722929s
STEP: Saw pod success
Feb 11 13:08:38.836: INFO: Pod "pod-98341276-4ccf-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 13:08:38.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-98341276-4ccf-11ea-a6e3-0242ac110005 container test-container: 
STEP: delete the pod
Feb 11 13:08:39.873: INFO: Waiting for pod pod-98341276-4ccf-11ea-a6e3-0242ac110005 to disappear
Feb 11 13:08:39.922: INFO: Pod pod-98341276-4ccf-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:08:39.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lnqtj" for this suite.
Feb 11 13:08:46.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:08:46.094: INFO: namespace: e2e-tests-emptydir-lnqtj, resource: bindings, ignored listing per whitelist
Feb 11 13:08:46.420: INFO: namespace e2e-tests-emptydir-lnqtj deletion completed in 6.44553438s

• [SLOW TEST:18.307 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
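
This emptyDir variant adds a non-root user and a 0644 file-mode expectation to the tmpfs case shown earlier. A compact illustrative manifest (UID, image and paths are assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root UID
  containers:
  - name: test-container
    image: busybox                 # placeholder image
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-0644-nonroot   # expect 644
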
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:08:46.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb 11 13:08:47.519: INFO: created pod pod-service-account-defaultsa
Feb 11 13:08:47.519: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 11 13:08:47.641: INFO: created pod pod-service-account-mountsa
Feb 11 13:08:47.641: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 11 13:08:47.700: INFO: created pod pod-service-account-nomountsa
Feb 11 13:08:47.700: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 11 13:08:47.821: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 11 13:08:47.822: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 11 13:08:47.896: INFO: created pod pod-service-account-mountsa-mountspec
Feb 11 13:08:47.896: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 11 13:08:48.101: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 11 13:08:48.101: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 11 13:08:48.174: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 11 13:08:48.174: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 11 13:08:48.310: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 11 13:08:48.311: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 11 13:08:49.752: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 11 13:08:49.753: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:08:49.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-rtztk" for this suite.
Feb 11 13:09:19.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:09:19.675: INFO: namespace: e2e-tests-svcaccounts-rtztk, resource: bindings, ignored listing per whitelist
Feb 11 13:09:19.895: INFO: namespace e2e-tests-svcaccounts-rtztk deletion completed in 29.348285356s

• [SLOW TEST:33.475 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
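
The nine pods created above sweep the combinations of service-account-level and pod-level automount settings; the "volume mount: false" lines are the cases where no token is mounted. The pod-level opt-out alone looks like this (service account and names are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-token-automount
spec:
  restartPolicy: Never
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox                 # placeholder image
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo no token mounted"]
EOF
kubectl logs no-token-automount    # expect "no token mounted"
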
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:09:19.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b70bb287-4ccf-11ea-a6e3-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 11 13:09:20.605: INFO: Waiting up to 5m0s for pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005" in namespace "e2e-tests-secrets-sfksx" to be "success or failure"
Feb 11 13:09:20.928: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 322.43535ms
Feb 11 13:09:23.121: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.515165169s
Feb 11 13:09:25.139: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.533594112s
Feb 11 13:09:27.159: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.553007288s
Feb 11 13:09:29.432: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.826575334s
Feb 11 13:09:31.447: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.841787019s
Feb 11 13:09:33.463: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.857696579s
Feb 11 13:09:35.476: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.87043799s
STEP: Saw pod success
Feb 11 13:09:35.476: INFO: Pod "pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005" satisfied condition "success or failure"
Feb 11 13:09:35.482: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 11 13:09:37.689: INFO: Waiting for pod pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005 to disappear
Feb 11 13:09:37.721: INFO: Pod pod-secrets-b736e924-4ccf-11ea-a6e3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:09:37.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sfksx" for this suite.
Feb 11 13:09:44.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:09:44.151: INFO: namespace: e2e-tests-secrets-sfksx, resource: bindings, ignored listing per whitelist
Feb 11 13:09:44.193: INFO: namespace e2e-tests-secrets-sfksx deletion completed in 6.394079637s
STEP: Destroying namespace "e2e-tests-secret-namespace-z9klc" for this suite.
Feb 11 13:09:50.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:09:50.282: INFO: namespace: e2e-tests-secret-namespace-z9klc, resource: bindings, ignored listing per whitelist
Feb 11 13:09:50.429: INFO: namespace e2e-tests-secret-namespace-z9klc deletion completed in 6.23575723s

• [SLOW TEST:30.533 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
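Editor's note: the Secrets spec above creates a secret, mounts it into a pod as a volume, and confirms the mount is unaffected by a same-named secret in a second namespace (hence the extra e2e-tests-secret-namespace-z9klc namespace being cleaned up). A minimal sketch of that shape follows, with hypothetical namespace and secret names; secret volumes always resolve against the pod's own namespace.

    # Sketch only: names/namespaces are illustrative, not the generated e2e ones.
    # A secret volume is resolved in the pod's own namespace, so a same-named
    # secret in another namespace cannot leak into the mount.
    kubectl create namespace demo-a
    kubectl create namespace demo-b
    kubectl -n demo-a create secret generic shared-name --from-literal=data-1=value-a
    kubectl -n demo-b create secret generic shared-name --from-literal=data-1=value-b
    cat <<'EOF' | kubectl -n demo-a apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["cat", "/etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: shared-name
    EOF
    kubectl -n demo-a logs secret-volume-demo   # expect: value-a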
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 11 13:09:50.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7w5wl
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 11 13:09:50.785: INFO: Found 0 stateful pods, waiting for 3
Feb 11 13:10:00.849: INFO: Found 1 stateful pods, waiting for 3
Feb 11 13:10:10.899: INFO: Found 2 stateful pods, waiting for 3
Feb 11 13:10:21.082: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 13:10:21.082: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 13:10:21.082: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 11 13:10:30.828: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 13:10:30.828: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 13:10:30.828: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 13:10:30.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7w5wl ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 13:10:31.713: INFO: stderr: "I0211 13:10:31.224914    3907 log.go:172] (0xc0007162c0) (0xc0006df4a0) Create stream\nI0211 13:10:31.225154    3907 log.go:172] (0xc0007162c0) (0xc0006df4a0) Stream added, broadcasting: 1\nI0211 13:10:31.231502    3907 log.go:172] (0xc0007162c0) Reply frame received for 1\nI0211 13:10:31.231578    3907 log.go:172] (0xc0007162c0) (0xc000374000) Create stream\nI0211 13:10:31.231593    3907 log.go:172] (0xc0007162c0) (0xc000374000) Stream added, broadcasting: 3\nI0211 13:10:31.232402    3907 log.go:172] (0xc0007162c0) Reply frame received for 3\nI0211 13:10:31.232420    3907 log.go:172] (0xc0007162c0) (0xc0006df540) Create stream\nI0211 13:10:31.232426    3907 log.go:172] (0xc0007162c0) (0xc0006df540) Stream added, broadcasting: 5\nI0211 13:10:31.233196    3907 log.go:172] (0xc0007162c0) Reply frame received for 5\nI0211 13:10:31.531005    3907 log.go:172] (0xc0007162c0) Data frame received for 3\nI0211 13:10:31.531079    3907 log.go:172] (0xc000374000) (3) Data frame handling\nI0211 13:10:31.531102    3907 log.go:172] (0xc000374000) (3) Data frame sent\nI0211 13:10:31.697655    3907 log.go:172] (0xc0007162c0) (0xc000374000) Stream removed, broadcasting: 3\nI0211 13:10:31.697818    3907 log.go:172] (0xc0007162c0) (0xc0006df540) Stream removed, broadcasting: 5\nI0211 13:10:31.697856    3907 log.go:172] (0xc0007162c0) Data frame received for 1\nI0211 13:10:31.697874    3907 log.go:172] (0xc0006df4a0) (1) Data frame handling\nI0211 13:10:31.697909    3907 log.go:172] (0xc0006df4a0) (1) Data frame sent\nI0211 13:10:31.697936    3907 log.go:172] (0xc0007162c0) (0xc0006df4a0) Stream removed, broadcasting: 1\nI0211 13:10:31.697956    3907 log.go:172] (0xc0007162c0) Go away received\nI0211 13:10:31.698524    3907 log.go:172] (0xc0007162c0) (0xc0006df4a0) Stream removed, broadcasting: 1\nI0211 13:10:31.698563    3907 log.go:172] (0xc0007162c0) (0xc000374000) Stream removed, broadcasting: 3\nI0211 13:10:31.698577    3907 log.go:172] (0xc0007162c0) (0xc0006df540) Stream removed, broadcasting: 5\n"
Feb 11 13:10:31.714: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 13:10:31.714: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 11 13:10:41.850: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 11 13:10:51.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7w5wl ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 13:10:52.811: INFO: stderr: "I0211 13:10:52.224650    3929 log.go:172] (0xc0001546e0) (0xc00075e640) Create stream\nI0211 13:10:52.224797    3929 log.go:172] (0xc0001546e0) (0xc00075e640) Stream added, broadcasting: 1\nI0211 13:10:52.232081    3929 log.go:172] (0xc0001546e0) Reply frame received for 1\nI0211 13:10:52.232143    3929 log.go:172] (0xc0001546e0) (0xc000612d20) Create stream\nI0211 13:10:52.232154    3929 log.go:172] (0xc0001546e0) (0xc000612d20) Stream added, broadcasting: 3\nI0211 13:10:52.233170    3929 log.go:172] (0xc0001546e0) Reply frame received for 3\nI0211 13:10:52.233241    3929 log.go:172] (0xc0001546e0) (0xc000228000) Create stream\nI0211 13:10:52.233276    3929 log.go:172] (0xc0001546e0) (0xc000228000) Stream added, broadcasting: 5\nI0211 13:10:52.234541    3929 log.go:172] (0xc0001546e0) Reply frame received for 5\nI0211 13:10:52.384730    3929 log.go:172] (0xc0001546e0) Data frame received for 3\nI0211 13:10:52.384822    3929 log.go:172] (0xc000612d20) (3) Data frame handling\nI0211 13:10:52.384877    3929 log.go:172] (0xc000612d20) (3) Data frame sent\nI0211 13:10:52.785948    3929 log.go:172] (0xc0001546e0) Data frame received for 1\nI0211 13:10:52.786092    3929 log.go:172] (0xc00075e640) (1) Data frame handling\nI0211 13:10:52.786131    3929 log.go:172] (0xc00075e640) (1) Data frame sent\nI0211 13:10:52.787442    3929 log.go:172] (0xc0001546e0) (0xc00075e640) Stream removed, broadcasting: 1\nI0211 13:10:52.787520    3929 log.go:172] (0xc0001546e0) (0xc000612d20) Stream removed, broadcasting: 3\nI0211 13:10:52.787607    3929 log.go:172] (0xc0001546e0) (0xc000228000) Stream removed, broadcasting: 5\nI0211 13:10:52.787959    3929 log.go:172] (0xc0001546e0) Go away received\nI0211 13:10:52.788257    3929 log.go:172] (0xc0001546e0) (0xc00075e640) Stream removed, broadcasting: 1\nI0211 13:10:52.788310    3929 log.go:172] (0xc0001546e0) (0xc000612d20) Stream removed, broadcasting: 3\nI0211 13:10:52.788431    3929 log.go:172] (0xc0001546e0) (0xc000228000) Stream removed, broadcasting: 5\n"
Feb 11 13:10:52.812: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 13:10:52.812: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 13:11:02.892: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:11:02.893: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:02.893: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:02.893: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:12.971: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:11:12.971: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:12.971: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:22.914: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:11:22.914: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:22.914: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:33.001: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:11:33.002: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:42.997: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:11:42.998: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 13:11:53.397: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 11 13:12:02.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7w5wl ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 13:12:04.139: INFO: stderr: "I0211 13:12:03.195719    3951 log.go:172] (0xc0006b6370) (0xc000635400) Create stream\nI0211 13:12:03.197077    3951 log.go:172] (0xc0006b6370) (0xc000635400) Stream added, broadcasting: 1\nI0211 13:12:03.205260    3951 log.go:172] (0xc0006b6370) Reply frame received for 1\nI0211 13:12:03.205317    3951 log.go:172] (0xc0006b6370) (0xc0006354a0) Create stream\nI0211 13:12:03.205345    3951 log.go:172] (0xc0006b6370) (0xc0006354a0) Stream added, broadcasting: 3\nI0211 13:12:03.206457    3951 log.go:172] (0xc0006b6370) Reply frame received for 3\nI0211 13:12:03.206482    3951 log.go:172] (0xc0006b6370) (0xc000736000) Create stream\nI0211 13:12:03.206516    3951 log.go:172] (0xc0006b6370) (0xc000736000) Stream added, broadcasting: 5\nI0211 13:12:03.207267    3951 log.go:172] (0xc0006b6370) Reply frame received for 5\nI0211 13:12:03.609409    3951 log.go:172] (0xc0006b6370) Data frame received for 3\nI0211 13:12:03.609478    3951 log.go:172] (0xc0006354a0) (3) Data frame handling\nI0211 13:12:03.609573    3951 log.go:172] (0xc0006354a0) (3) Data frame sent\nI0211 13:12:04.124892    3951 log.go:172] (0xc0006b6370) (0xc000736000) Stream removed, broadcasting: 5\nI0211 13:12:04.124984    3951 log.go:172] (0xc0006b6370) Data frame received for 1\nI0211 13:12:04.125006    3951 log.go:172] (0xc000635400) (1) Data frame handling\nI0211 13:12:04.125022    3951 log.go:172] (0xc0006b6370) (0xc0006354a0) Stream removed, broadcasting: 3\nI0211 13:12:04.125057    3951 log.go:172] (0xc000635400) (1) Data frame sent\nI0211 13:12:04.125074    3951 log.go:172] (0xc0006b6370) (0xc000635400) Stream removed, broadcasting: 1\nI0211 13:12:04.125102    3951 log.go:172] (0xc0006b6370) Go away received\nI0211 13:12:04.125521    3951 log.go:172] (0xc0006b6370) (0xc000635400) Stream removed, broadcasting: 1\nI0211 13:12:04.125535    3951 log.go:172] (0xc0006b6370) (0xc0006354a0) Stream removed, broadcasting: 3\nI0211 13:12:04.125544    3951 log.go:172] (0xc0006b6370) (0xc000736000) Stream removed, broadcasting: 5\n"
Feb 11 13:12:04.139: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 13:12:04.139: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 13:12:14.229: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 11 13:12:24.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7w5wl ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 13:12:25.271: INFO: stderr: "I0211 13:12:24.757873    3972 log.go:172] (0xc0006dc370) (0xc0006fa640) Create stream\nI0211 13:12:24.758639    3972 log.go:172] (0xc0006dc370) (0xc0006fa640) Stream added, broadcasting: 1\nI0211 13:12:24.801479    3972 log.go:172] (0xc0006dc370) Reply frame received for 1\nI0211 13:12:24.801668    3972 log.go:172] (0xc0006dc370) (0xc0005a8be0) Create stream\nI0211 13:12:24.801687    3972 log.go:172] (0xc0006dc370) (0xc0005a8be0) Stream added, broadcasting: 3\nI0211 13:12:24.804303    3972 log.go:172] (0xc0006dc370) Reply frame received for 3\nI0211 13:12:24.804343    3972 log.go:172] (0xc0006dc370) (0xc0002d4000) Create stream\nI0211 13:12:24.804370    3972 log.go:172] (0xc0006dc370) (0xc0002d4000) Stream added, broadcasting: 5\nI0211 13:12:24.806900    3972 log.go:172] (0xc0006dc370) Reply frame received for 5\nI0211 13:12:25.093196    3972 log.go:172] (0xc0006dc370) Data frame received for 3\nI0211 13:12:25.093588    3972 log.go:172] (0xc0005a8be0) (3) Data frame handling\nI0211 13:12:25.093638    3972 log.go:172] (0xc0005a8be0) (3) Data frame sent\nI0211 13:12:25.256040    3972 log.go:172] (0xc0006dc370) (0xc0005a8be0) Stream removed, broadcasting: 3\nI0211 13:12:25.257072    3972 log.go:172] (0xc0006dc370) Data frame received for 1\nI0211 13:12:25.257095    3972 log.go:172] (0xc0006fa640) (1) Data frame handling\nI0211 13:12:25.257127    3972 log.go:172] (0xc0006fa640) (1) Data frame sent\nI0211 13:12:25.257221    3972 log.go:172] (0xc0006dc370) (0xc0002d4000) Stream removed, broadcasting: 5\nI0211 13:12:25.257268    3972 log.go:172] (0xc0006dc370) (0xc0006fa640) Stream removed, broadcasting: 1\nI0211 13:12:25.257289    3972 log.go:172] (0xc0006dc370) Go away received\nI0211 13:12:25.257574    3972 log.go:172] (0xc0006dc370) (0xc0006fa640) Stream removed, broadcasting: 1\nI0211 13:12:25.257595    3972 log.go:172] (0xc0006dc370) (0xc0005a8be0) Stream removed, broadcasting: 3\nI0211 13:12:25.257604    3972 log.go:172] (0xc0006dc370) (0xc0002d4000) Stream removed, broadcasting: 5\n"
Feb 11 13:12:25.271: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 13:12:25.271: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 13:12:35.565: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:12:35.565: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:12:35.565: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:12:45.652: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:12:45.653: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:12:45.653: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:12:55.583: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:12:55.583: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:12:55.583: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:13:06.036: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:13:06.037: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:13:15.614: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
Feb 11 13:13:15.615: INFO: Waiting for Pod e2e-tests-statefulset-7w5wl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 11 13:13:25.899: INFO: Waiting for StatefulSet e2e-tests-statefulset-7w5wl/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 11 13:13:35.588: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7w5wl
Feb 11 13:13:35.599: INFO: Scaling statefulset ss2 to 0
Feb 11 13:14:05.653: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 13:14:05.667: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 11 13:14:05.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7w5wl" for this suite.
Feb 11 13:14:15.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:14:15.957: INFO: namespace: e2e-tests-statefulset-7w5wl, resource: bindings, ignored listing per whitelist
Feb 11 13:14:16.018: INFO: namespace e2e-tests-statefulset-7w5wl deletion completed in 10.281749715s

• [SLOW TEST:265.589 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
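Editor's note: the StatefulSet spec above drives a rolling update from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine and then rolls it back, waiting for the pods' controller-revision hashes (ss2-6c5cd755cd vs ss2-7c9b54fd4c) to converge; the index.html shuffling via kubectl exec is the framework's way of toggling the readiness probe to pace the rollout. A sketch of the same update/rollback flow on a plain StatefulSet follows; the namespace, StatefulSet name, container name, and label are illustrative assumptions.

    # Sketch only: statefulset/namespace/container names are illustrative.
    # Trigger a rolling update by changing the pod template image:
    kubectl -n demo set image statefulset/web nginx=docker.io/library/nginx:1.15-alpine
    kubectl -n demo rollout status statefulset/web    # blocks until the update revision is fully rolled out
    # Inspect the controller revisions the update produced:
    kubectl -n demo get controllerrevisions -l app=web
    # Roll back to the previous template revision and wait again:
    kubectl -n demo rollout undo statefulset/web
    kubectl -n demo rollout status statefulset/web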
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb 11 13:14:16.020: INFO: Running AfterSuite actions on all nodes
Feb 11 13:14:16.020: INFO: Running AfterSuite actions on node 1
Feb 11 13:14:16.020: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8811.524 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS
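Editor's note: the suite ran 199 of 2164 specs and skipped the rest, presumably because the run was focused on [Conformance] specs. A hedged sketch of re-running a single spec from a locally built e2e.test binary follows; the binary location and focus regex are illustrative assumptions, not taken from this run.

    # Sketch only: binary path and focus regex are illustrative assumptions.
    # e2e.test passes --ginkgo.* flags through to Ginkgo, so one conformance
    # spec can be re-run by focusing on its name:
    ./e2e.test \
      --kubeconfig=/root/.kube/config \
      --ginkgo.focus='\[sig-apps\] StatefulSet.*rolling updates and roll backs'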