I0428 10:47:16.433974 6 e2e.go:224] Starting e2e run "a00d4fe0-893d-11ea-80e8-0242ac11000f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588070835 - Will randomize all specs
Will run 201 of 2164 specs

Apr 28 10:47:16.618: INFO: >>> kubeConfig: /root/.kube/config
Apr 28 10:47:16.621: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 28 10:47:16.637: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 28 10:47:16.663: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 28 10:47:16.663: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 28 10:47:16.663: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 28 10:47:16.672: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 28 10:47:16.672: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 28 10:47:16.672: INFO: e2e test version: v1.13.12
Apr 28 10:47:16.673: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Apr 28 10:47:16.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Apr 28 10:47:16.755: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Apr 28 10:47:16.762: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.292722ms)
Apr 28 10:47:16.764: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.212462ms)
Apr 28 10:47:16.799: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 34.783246ms)
Apr 28 10:47:16.802: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.100026ms)
Apr 28 10:47:16.805: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.180557ms)
Apr 28 10:47:16.807: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.06535ms)
Apr 28 10:47:16.810: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.701649ms)
Apr 28 10:47:16.812: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.382828ms)
Apr 28 10:47:16.815: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.759322ms)
Apr 28 10:47:16.818: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.04434ms)
Apr 28 10:47:16.821: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.142775ms)
Apr 28 10:47:16.824: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.903918ms)
Apr 28 10:47:16.827: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.536081ms)
Apr 28 10:47:16.829: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.514621ms)
Apr 28 10:47:16.832: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.367978ms)
Apr 28 10:47:16.834: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.442439ms)
Apr 28 10:47:16.837: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.805925ms)
Apr 28 10:47:16.840: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.787569ms)
Apr 28 10:47:16.843: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.912177ms)
Apr 28 10:47:16.846: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.253292ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:47:16.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-dwxd4" for this suite. Apr 28 10:47:22.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:47:22.906: INFO: namespace: e2e-tests-proxy-dwxd4, resource: bindings, ignored listing per whitelist Apr 28 10:47:22.930: INFO: namespace e2e-tests-proxy-dwxd4 deletion completed in 6.080102933s • [SLOW TEST:6.256 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:47:22.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Apr 28 10:47:23.002: INFO: PodSpec: initContainers in spec.initContainers Apr 28 10:48:11.932: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a4421772-893d-11ea-80e8-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-init-container-29q6w", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-29q6w/pods/pod-init-a4421772-893d-11ea-80e8-0242ac11000f", UID:"a444799e-893d-11ea-99e8-0242ac110002", ResourceVersion:"7630506", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723667643, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"2666629"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gd8x7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000e61440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gd8x7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gd8x7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gd8x7", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e674a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000fd2240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e67530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e67550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000e67558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000e6755c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723667643, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723667643, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723667643, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723667643, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.211", StartTime:(*v1.Time)(0xc0014030e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0006a2700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0006a2770)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://81eea560d9feef20b740ec3d4eff0f9f54af8def1b20924553bd24caad4cc004"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001403120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001403100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:48:11.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-29q6w" for this suite. Apr 28 10:48:33.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:48:34.070: INFO: namespace: e2e-tests-init-container-29q6w, resource: bindings, ignored listing per whitelist Apr 28 10:48:34.085: INFO: namespace e2e-tests-init-container-29q6w deletion completed in 22.143515834s • [SLOW TEST:71.156 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:48:34.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Apr 28 10:48:34.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:36.397: INFO: stderr: "" Apr 28 10:48:36.397: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 10:48:36.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:36.629: INFO: stderr: "" Apr 28 10:48:36.629: INFO: stdout: "update-demo-nautilus-27f8m update-demo-nautilus-vwdfc " Apr 28 10:48:36.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27f8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:36.735: INFO: stderr: "" Apr 28 10:48:36.735: INFO: stdout: "" Apr 28 10:48:36.735: INFO: update-demo-nautilus-27f8m is created but not running Apr 28 10:48:41.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:41.837: INFO: stderr: "" Apr 28 10:48:41.837: INFO: stdout: "update-demo-nautilus-27f8m update-demo-nautilus-vwdfc " Apr 28 10:48:41.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27f8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:41.938: INFO: stderr: "" Apr 28 10:48:41.938: INFO: stdout: "true" Apr 28 10:48:41.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27f8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:42.035: INFO: stderr: "" Apr 28 10:48:42.035: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 10:48:42.035: INFO: validating pod update-demo-nautilus-27f8m Apr 28 10:48:42.038: INFO: got data: { "image": "nautilus.jpg" } Apr 28 10:48:42.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 10:48:42.039: INFO: update-demo-nautilus-27f8m is verified up and running Apr 28 10:48:42.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwdfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:42.130: INFO: stderr: "" Apr 28 10:48:42.130: INFO: stdout: "true" Apr 28 10:48:42.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwdfc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:48:42.230: INFO: stderr: "" Apr 28 10:48:42.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 10:48:42.230: INFO: validating pod update-demo-nautilus-vwdfc Apr 28 10:48:42.234: INFO: got data: { "image": "nautilus.jpg" } Apr 28 10:48:42.234: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 10:48:42.234: INFO: update-demo-nautilus-vwdfc is verified up and running STEP: rolling-update to new replication controller Apr 28 10:48:42.236: INFO: scanned /root for discovery docs: Apr 28 10:48:42.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-frw95' Apr 28 10:49:04.827: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 28 10:49:04.827: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 10:49:04.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-frw95' Apr 28 10:49:04.927: INFO: stderr: "" Apr 28 10:49:04.927: INFO: stdout: "update-demo-kitten-lkz9m update-demo-kitten-wqbrt " Apr 28 10:49:04.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lkz9m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:49:05.031: INFO: stderr: "" Apr 28 10:49:05.031: INFO: stdout: "true" Apr 28 10:49:05.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lkz9m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:49:05.122: INFO: stderr: "" Apr 28 10:49:05.123: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 28 10:49:05.123: INFO: validating pod update-demo-kitten-lkz9m Apr 28 10:49:05.126: INFO: got data: { "image": "kitten.jpg" } Apr 28 10:49:05.126: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 28 10:49:05.126: INFO: update-demo-kitten-lkz9m is verified up and running Apr 28 10:49:05.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wqbrt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:49:05.224: INFO: stderr: "" Apr 28 10:49:05.224: INFO: stdout: "true" Apr 28 10:49:05.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wqbrt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-frw95' Apr 28 10:49:05.333: INFO: stderr: "" Apr 28 10:49:05.333: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 28 10:49:05.333: INFO: validating pod update-demo-kitten-wqbrt Apr 28 10:49:05.338: INFO: got data: { "image": "kitten.jpg" } Apr 28 10:49:05.338: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 28 10:49:05.338: INFO: update-demo-kitten-wqbrt is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:49:05.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-frw95" for this suite. Apr 28 10:49:29.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:49:29.398: INFO: namespace: e2e-tests-kubectl-frw95, resource: bindings, ignored listing per whitelist Apr 28 10:49:29.434: INFO: namespace e2e-tests-kubectl-frw95 deletion completed in 24.092745358s • [SLOW TEST:55.348 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:49:29.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 10:49:29.609: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Apr 28 10:49:29.617: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rz57q/daemonsets","resourceVersion":"7630803"},"items":null} Apr 28 10:49:29.620: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rz57q/pods","resourceVersion":"7630803"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Apr 28 10:49:29.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rz57q" for this suite.
Apr 28 10:49:35.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 10:49:35.651: INFO: namespace: e2e-tests-daemonsets-rz57q, resource: bindings, ignored listing per whitelist
Apr 28 10:49:35.715: INFO: namespace e2e-tests-daemonsets-rz57q deletion completed in 6.086158658s
S [SKIPPING] [6.280 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Apr 28 10:49:29.609: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Apr 28 10:49:35.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 28 10:49:35.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-j8pcm'
Apr 28 10:49:35.960: INFO: stderr: ""
Apr 28 10:49:35.960: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Apr 28 10:49:35.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j8pcm'
Apr 28 10:49:41.728: INFO: stderr: ""
Apr 28 10:49:41.729: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Apr 28 10:49:41.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j8pcm" for this suite.
Apr 28 10:49:47.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:49:47.752: INFO: namespace: e2e-tests-kubectl-j8pcm, resource: bindings, ignored listing per whitelist Apr 28 10:49:47.818: INFO: namespace e2e-tests-kubectl-j8pcm deletion completed in 6.086660732s • [SLOW TEST:12.104 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:49:47.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 28 10:49:58.001: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.001: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.033547 6 log.go:172] (0xc001e3c2c0) (0xc001f40aa0) Create stream I0428 10:49:58.033578 6 log.go:172] (0xc001e3c2c0) (0xc001f40aa0) Stream added, broadcasting: 1 I0428 10:49:58.039096 6 log.go:172] (0xc001e3c2c0) Reply frame received for 1 I0428 10:49:58.039157 6 log.go:172] (0xc001e3c2c0) (0xc000a34fa0) Create stream I0428 10:49:58.039194 6 log.go:172] (0xc001e3c2c0) (0xc000a34fa0) Stream added, broadcasting: 3 I0428 10:49:58.040520 6 log.go:172] (0xc001e3c2c0) Reply frame received for 3 I0428 10:49:58.040544 6 log.go:172] (0xc001e3c2c0) (0xc0007400a0) Create stream I0428 10:49:58.040566 6 log.go:172] (0xc001e3c2c0) (0xc0007400a0) Stream added, broadcasting: 5 I0428 10:49:58.041418 6 log.go:172] (0xc001e3c2c0) Reply frame received for 5 I0428 10:49:58.114177 6 log.go:172] (0xc001e3c2c0) Data frame received for 3 I0428 10:49:58.114214 6 log.go:172] (0xc000a34fa0) (3) Data frame handling I0428 10:49:58.114230 6 log.go:172] (0xc000a34fa0) (3) Data frame sent I0428 10:49:58.114266 6 log.go:172] (0xc001e3c2c0) Data frame received for 3 I0428 10:49:58.114296 6 log.go:172] (0xc000a34fa0) (3) Data frame handling I0428 10:49:58.114319 6 log.go:172] (0xc001e3c2c0) Data frame received for 5 I0428 10:49:58.114332 6 log.go:172] (0xc0007400a0) (5) Data frame handling I0428 10:49:58.116071 6 log.go:172] (0xc001e3c2c0) Data frame received for 1 I0428 10:49:58.116104 6 log.go:172] (0xc001f40aa0) (1) Data frame handling 
I0428 10:49:58.116121 6 log.go:172] (0xc001f40aa0) (1) Data frame sent I0428 10:49:58.116158 6 log.go:172] (0xc001e3c2c0) (0xc001f40aa0) Stream removed, broadcasting: 1 I0428 10:49:58.116196 6 log.go:172] (0xc001e3c2c0) Go away received I0428 10:49:58.116464 6 log.go:172] (0xc001e3c2c0) (0xc001f40aa0) Stream removed, broadcasting: 1 I0428 10:49:58.116500 6 log.go:172] (0xc001e3c2c0) (0xc000a34fa0) Stream removed, broadcasting: 3 I0428 10:49:58.116576 6 log.go:172] (0xc001e3c2c0) (0xc0007400a0) Stream removed, broadcasting: 5 Apr 28 10:49:58.116: INFO: Exec stderr: "" Apr 28 10:49:58.116: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.116: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.162805 6 log.go:172] (0xc0000eadc0) (0xc0010c21e0) Create stream I0428 10:49:58.162843 6 log.go:172] (0xc0000eadc0) (0xc0010c21e0) Stream added, broadcasting: 1 I0428 10:49:58.165205 6 log.go:172] (0xc0000eadc0) Reply frame received for 1 I0428 10:49:58.165292 6 log.go:172] (0xc0000eadc0) (0xc0017e2000) Create stream I0428 10:49:58.165313 6 log.go:172] (0xc0000eadc0) (0xc0017e2000) Stream added, broadcasting: 3 I0428 10:49:58.166351 6 log.go:172] (0xc0000eadc0) Reply frame received for 3 I0428 10:49:58.166401 6 log.go:172] (0xc0000eadc0) (0xc0010c2280) Create stream I0428 10:49:58.166417 6 log.go:172] (0xc0000eadc0) (0xc0010c2280) Stream added, broadcasting: 5 I0428 10:49:58.167231 6 log.go:172] (0xc0000eadc0) Reply frame received for 5 I0428 10:49:58.229090 6 log.go:172] (0xc0000eadc0) Data frame received for 5 I0428 10:49:58.229290 6 log.go:172] (0xc0010c2280) (5) Data frame handling I0428 10:49:58.229337 6 log.go:172] (0xc0000eadc0) Data frame received for 3 I0428 10:49:58.229360 6 log.go:172] (0xc0017e2000) (3) Data frame handling I0428 10:49:58.229390 6 log.go:172] (0xc0017e2000) (3) Data frame sent I0428 10:49:58.229412 6 log.go:172] (0xc0000eadc0) Data frame received for 3 I0428 10:49:58.229431 6 log.go:172] (0xc0017e2000) (3) Data frame handling I0428 10:49:58.230714 6 log.go:172] (0xc0000eadc0) Data frame received for 1 I0428 10:49:58.230754 6 log.go:172] (0xc0010c21e0) (1) Data frame handling I0428 10:49:58.230775 6 log.go:172] (0xc0010c21e0) (1) Data frame sent I0428 10:49:58.230808 6 log.go:172] (0xc0000eadc0) (0xc0010c21e0) Stream removed, broadcasting: 1 I0428 10:49:58.230830 6 log.go:172] (0xc0000eadc0) Go away received I0428 10:49:58.230968 6 log.go:172] (0xc0000eadc0) (0xc0010c21e0) Stream removed, broadcasting: 1 I0428 10:49:58.230992 6 log.go:172] (0xc0000eadc0) (0xc0017e2000) Stream removed, broadcasting: 3 I0428 10:49:58.231006 6 log.go:172] (0xc0000eadc0) (0xc0010c2280) Stream removed, broadcasting: 5 Apr 28 10:49:58.231: INFO: Exec stderr: "" Apr 28 10:49:58.231: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.231: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.265623 6 log.go:172] (0xc000c6e630) (0xc0018b61e0) Create stream I0428 10:49:58.265654 6 log.go:172] (0xc000c6e630) (0xc0018b61e0) Stream added, broadcasting: 1 I0428 10:49:58.268581 6 log.go:172] (0xc000c6e630) Reply frame received for 1 I0428 10:49:58.268626 6 log.go:172] (0xc000c6e630) (0xc000a16000) Create stream I0428 10:49:58.268642 6 log.go:172] 
(0xc000c6e630) (0xc000a16000) Stream added, broadcasting: 3 I0428 10:49:58.269741 6 log.go:172] (0xc000c6e630) Reply frame received for 3 I0428 10:49:58.269784 6 log.go:172] (0xc000c6e630) (0xc0007406e0) Create stream I0428 10:49:58.269808 6 log.go:172] (0xc000c6e630) (0xc0007406e0) Stream added, broadcasting: 5 I0428 10:49:58.270751 6 log.go:172] (0xc000c6e630) Reply frame received for 5 I0428 10:49:58.333868 6 log.go:172] (0xc000c6e630) Data frame received for 3 I0428 10:49:58.333898 6 log.go:172] (0xc000a16000) (3) Data frame handling I0428 10:49:58.333906 6 log.go:172] (0xc000a16000) (3) Data frame sent I0428 10:49:58.333910 6 log.go:172] (0xc000c6e630) Data frame received for 3 I0428 10:49:58.333914 6 log.go:172] (0xc000a16000) (3) Data frame handling I0428 10:49:58.333939 6 log.go:172] (0xc000c6e630) Data frame received for 5 I0428 10:49:58.333970 6 log.go:172] (0xc0007406e0) (5) Data frame handling I0428 10:49:58.335788 6 log.go:172] (0xc000c6e630) Data frame received for 1 I0428 10:49:58.335833 6 log.go:172] (0xc0018b61e0) (1) Data frame handling I0428 10:49:58.335876 6 log.go:172] (0xc0018b61e0) (1) Data frame sent I0428 10:49:58.335900 6 log.go:172] (0xc000c6e630) (0xc0018b61e0) Stream removed, broadcasting: 1 I0428 10:49:58.335928 6 log.go:172] (0xc000c6e630) Go away received I0428 10:49:58.336001 6 log.go:172] (0xc000c6e630) (0xc0018b61e0) Stream removed, broadcasting: 1 I0428 10:49:58.336016 6 log.go:172] (0xc000c6e630) (0xc000a16000) Stream removed, broadcasting: 3 I0428 10:49:58.336023 6 log.go:172] (0xc000c6e630) (0xc0007406e0) Stream removed, broadcasting: 5 Apr 28 10:49:58.336: INFO: Exec stderr: "" Apr 28 10:49:58.336: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.336: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.371564 6 log.go:172] (0xc0000eb290) (0xc0010c2500) Create stream I0428 10:49:58.371600 6 log.go:172] (0xc0000eb290) (0xc0010c2500) Stream added, broadcasting: 1 I0428 10:49:58.374726 6 log.go:172] (0xc0000eb290) Reply frame received for 1 I0428 10:49:58.374767 6 log.go:172] (0xc0000eb290) (0xc0018b6280) Create stream I0428 10:49:58.374779 6 log.go:172] (0xc0000eb290) (0xc0018b6280) Stream added, broadcasting: 3 I0428 10:49:58.375695 6 log.go:172] (0xc0000eb290) Reply frame received for 3 I0428 10:49:58.375731 6 log.go:172] (0xc0000eb290) (0xc000a16140) Create stream I0428 10:49:58.375745 6 log.go:172] (0xc0000eb290) (0xc000a16140) Stream added, broadcasting: 5 I0428 10:49:58.376612 6 log.go:172] (0xc0000eb290) Reply frame received for 5 I0428 10:49:58.433682 6 log.go:172] (0xc0000eb290) Data frame received for 5 I0428 10:49:58.433741 6 log.go:172] (0xc000a16140) (5) Data frame handling I0428 10:49:58.433789 6 log.go:172] (0xc0000eb290) Data frame received for 3 I0428 10:49:58.433826 6 log.go:172] (0xc0018b6280) (3) Data frame handling I0428 10:49:58.433865 6 log.go:172] (0xc0018b6280) (3) Data frame sent I0428 10:49:58.434092 6 log.go:172] (0xc0000eb290) Data frame received for 3 I0428 10:49:58.434119 6 log.go:172] (0xc0018b6280) (3) Data frame handling I0428 10:49:58.435834 6 log.go:172] (0xc0000eb290) Data frame received for 1 I0428 10:49:58.435871 6 log.go:172] (0xc0010c2500) (1) Data frame handling I0428 10:49:58.435894 6 log.go:172] (0xc0010c2500) (1) Data frame sent I0428 10:49:58.435926 6 log.go:172] (0xc0000eb290) (0xc0010c2500) Stream removed, broadcasting: 1 
I0428 10:49:58.435950 6 log.go:172] (0xc0000eb290) Go away received I0428 10:49:58.436067 6 log.go:172] (0xc0000eb290) (0xc0010c2500) Stream removed, broadcasting: 1 I0428 10:49:58.436089 6 log.go:172] (0xc0000eb290) (0xc0018b6280) Stream removed, broadcasting: 3 I0428 10:49:58.436118 6 log.go:172] (0xc0000eb290) (0xc000a16140) Stream removed, broadcasting: 5 Apr 28 10:49:58.436: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 28 10:49:58.436: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.436: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.471441 6 log.go:172] (0xc000c6eb00) (0xc0018b6500) Create stream I0428 10:49:58.471468 6 log.go:172] (0xc000c6eb00) (0xc0018b6500) Stream added, broadcasting: 1 I0428 10:49:58.474492 6 log.go:172] (0xc000c6eb00) Reply frame received for 1 I0428 10:49:58.474543 6 log.go:172] (0xc000c6eb00) (0xc0018b65a0) Create stream I0428 10:49:58.474560 6 log.go:172] (0xc000c6eb00) (0xc0018b65a0) Stream added, broadcasting: 3 I0428 10:49:58.475765 6 log.go:172] (0xc000c6eb00) Reply frame received for 3 I0428 10:49:58.475799 6 log.go:172] (0xc000c6eb00) (0xc0017e20a0) Create stream I0428 10:49:58.475811 6 log.go:172] (0xc000c6eb00) (0xc0017e20a0) Stream added, broadcasting: 5 I0428 10:49:58.476740 6 log.go:172] (0xc000c6eb00) Reply frame received for 5 I0428 10:49:58.539270 6 log.go:172] (0xc000c6eb00) Data frame received for 5 I0428 10:49:58.539324 6 log.go:172] (0xc0017e20a0) (5) Data frame handling I0428 10:49:58.539364 6 log.go:172] (0xc000c6eb00) Data frame received for 3 I0428 10:49:58.539385 6 log.go:172] (0xc0018b65a0) (3) Data frame handling I0428 10:49:58.539414 6 log.go:172] (0xc0018b65a0) (3) Data frame sent I0428 10:49:58.539435 6 log.go:172] (0xc000c6eb00) Data frame received for 3 I0428 10:49:58.539449 6 log.go:172] (0xc0018b65a0) (3) Data frame handling I0428 10:49:58.540851 6 log.go:172] (0xc000c6eb00) Data frame received for 1 I0428 10:49:58.540876 6 log.go:172] (0xc0018b6500) (1) Data frame handling I0428 10:49:58.540899 6 log.go:172] (0xc0018b6500) (1) Data frame sent I0428 10:49:58.540921 6 log.go:172] (0xc000c6eb00) (0xc0018b6500) Stream removed, broadcasting: 1 I0428 10:49:58.540943 6 log.go:172] (0xc000c6eb00) Go away received I0428 10:49:58.541281 6 log.go:172] (0xc000c6eb00) (0xc0018b6500) Stream removed, broadcasting: 1 I0428 10:49:58.541323 6 log.go:172] (0xc000c6eb00) (0xc0018b65a0) Stream removed, broadcasting: 3 I0428 10:49:58.541348 6 log.go:172] (0xc000c6eb00) (0xc0017e20a0) Stream removed, broadcasting: 5 Apr 28 10:49:58.541: INFO: Exec stderr: "" Apr 28 10:49:58.541: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.541: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.578002 6 log.go:172] (0xc001e3c580) (0xc0017e2320) Create stream I0428 10:49:58.578030 6 log.go:172] (0xc001e3c580) (0xc0017e2320) Stream added, broadcasting: 1 I0428 10:49:58.580811 6 log.go:172] (0xc001e3c580) Reply frame received for 1 I0428 10:49:58.580878 6 log.go:172] (0xc001e3c580) (0xc0018b6640) Create stream I0428 10:49:58.580914 6 log.go:172] (0xc001e3c580) (0xc0018b6640) Stream added, broadcasting: 3 I0428 
10:49:58.582195 6 log.go:172] (0xc001e3c580) Reply frame received for 3 I0428 10:49:58.582249 6 log.go:172] (0xc001e3c580) (0xc0010c25a0) Create stream I0428 10:49:58.582265 6 log.go:172] (0xc001e3c580) (0xc0010c25a0) Stream added, broadcasting: 5 I0428 10:49:58.583215 6 log.go:172] (0xc001e3c580) Reply frame received for 5 I0428 10:49:58.648664 6 log.go:172] (0xc001e3c580) Data frame received for 5 I0428 10:49:58.648723 6 log.go:172] (0xc0010c25a0) (5) Data frame handling I0428 10:49:58.648765 6 log.go:172] (0xc001e3c580) Data frame received for 3 I0428 10:49:58.648781 6 log.go:172] (0xc0018b6640) (3) Data frame handling I0428 10:49:58.648803 6 log.go:172] (0xc0018b6640) (3) Data frame sent I0428 10:49:58.648821 6 log.go:172] (0xc001e3c580) Data frame received for 3 I0428 10:49:58.648831 6 log.go:172] (0xc0018b6640) (3) Data frame handling I0428 10:49:58.650112 6 log.go:172] (0xc001e3c580) Data frame received for 1 I0428 10:49:58.650137 6 log.go:172] (0xc0017e2320) (1) Data frame handling I0428 10:49:58.650163 6 log.go:172] (0xc0017e2320) (1) Data frame sent I0428 10:49:58.650200 6 log.go:172] (0xc001e3c580) (0xc0017e2320) Stream removed, broadcasting: 1 I0428 10:49:58.650231 6 log.go:172] (0xc001e3c580) Go away received I0428 10:49:58.650358 6 log.go:172] (0xc001e3c580) (0xc0017e2320) Stream removed, broadcasting: 1 I0428 10:49:58.650412 6 log.go:172] (0xc001e3c580) (0xc0018b6640) Stream removed, broadcasting: 3 I0428 10:49:58.650443 6 log.go:172] (0xc001e3c580) (0xc0010c25a0) Stream removed, broadcasting: 5 Apr 28 10:49:58.650: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 28 10:49:58.650: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.650: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.676228 6 log.go:172] (0xc001e3ca50) (0xc0017e25a0) Create stream I0428 10:49:58.676270 6 log.go:172] (0xc001e3ca50) (0xc0017e25a0) Stream added, broadcasting: 1 I0428 10:49:58.679330 6 log.go:172] (0xc001e3ca50) Reply frame received for 1 I0428 10:49:58.679390 6 log.go:172] (0xc001e3ca50) (0xc000740a00) Create stream I0428 10:49:58.679409 6 log.go:172] (0xc001e3ca50) (0xc000740a00) Stream added, broadcasting: 3 I0428 10:49:58.680397 6 log.go:172] (0xc001e3ca50) Reply frame received for 3 I0428 10:49:58.680435 6 log.go:172] (0xc001e3ca50) (0xc0018b66e0) Create stream I0428 10:49:58.680448 6 log.go:172] (0xc001e3ca50) (0xc0018b66e0) Stream added, broadcasting: 5 I0428 10:49:58.681512 6 log.go:172] (0xc001e3ca50) Reply frame received for 5 I0428 10:49:58.756929 6 log.go:172] (0xc001e3ca50) Data frame received for 5 I0428 10:49:58.756977 6 log.go:172] (0xc0018b66e0) (5) Data frame handling I0428 10:49:58.757012 6 log.go:172] (0xc001e3ca50) Data frame received for 3 I0428 10:49:58.757031 6 log.go:172] (0xc000740a00) (3) Data frame handling I0428 10:49:58.757060 6 log.go:172] (0xc000740a00) (3) Data frame sent I0428 10:49:58.757078 6 log.go:172] (0xc001e3ca50) Data frame received for 3 I0428 10:49:58.757092 6 log.go:172] (0xc000740a00) (3) Data frame handling I0428 10:49:58.758762 6 log.go:172] (0xc001e3ca50) Data frame received for 1 I0428 10:49:58.758802 6 log.go:172] (0xc0017e25a0) (1) Data frame handling I0428 10:49:58.758839 6 log.go:172] (0xc0017e25a0) (1) Data frame sent I0428 10:49:58.758867 6 log.go:172] (0xc001e3ca50) 
(0xc0017e25a0) Stream removed, broadcasting: 1 I0428 10:49:58.758900 6 log.go:172] (0xc001e3ca50) Go away received I0428 10:49:58.759029 6 log.go:172] (0xc001e3ca50) (0xc0017e25a0) Stream removed, broadcasting: 1 I0428 10:49:58.759062 6 log.go:172] (0xc001e3ca50) (0xc000740a00) Stream removed, broadcasting: 3 I0428 10:49:58.759074 6 log.go:172] (0xc001e3ca50) (0xc0018b66e0) Stream removed, broadcasting: 5 Apr 28 10:49:58.759: INFO: Exec stderr: "" Apr 28 10:49:58.759: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.759: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.794354 6 log.go:172] (0xc000c6efd0) (0xc0018b6960) Create stream I0428 10:49:58.794381 6 log.go:172] (0xc000c6efd0) (0xc0018b6960) Stream added, broadcasting: 1 I0428 10:49:58.797072 6 log.go:172] (0xc000c6efd0) Reply frame received for 1 I0428 10:49:58.797274 6 log.go:172] (0xc000c6efd0) (0xc000a163c0) Create stream I0428 10:49:58.797309 6 log.go:172] (0xc000c6efd0) (0xc000a163c0) Stream added, broadcasting: 3 I0428 10:49:58.798890 6 log.go:172] (0xc000c6efd0) Reply frame received for 3 I0428 10:49:58.798956 6 log.go:172] (0xc000c6efd0) (0xc000a16500) Create stream I0428 10:49:58.798979 6 log.go:172] (0xc000c6efd0) (0xc000a16500) Stream added, broadcasting: 5 I0428 10:49:58.800183 6 log.go:172] (0xc000c6efd0) Reply frame received for 5 I0428 10:49:58.857060 6 log.go:172] (0xc000c6efd0) Data frame received for 5 I0428 10:49:58.857084 6 log.go:172] (0xc000a16500) (5) Data frame handling I0428 10:49:58.857101 6 log.go:172] (0xc000c6efd0) Data frame received for 3 I0428 10:49:58.857196 6 log.go:172] (0xc000a163c0) (3) Data frame handling I0428 10:49:58.857205 6 log.go:172] (0xc000a163c0) (3) Data frame sent I0428 10:49:58.857210 6 log.go:172] (0xc000c6efd0) Data frame received for 3 I0428 10:49:58.857214 6 log.go:172] (0xc000a163c0) (3) Data frame handling I0428 10:49:58.859023 6 log.go:172] (0xc000c6efd0) Data frame received for 1 I0428 10:49:58.859054 6 log.go:172] (0xc0018b6960) (1) Data frame handling I0428 10:49:58.859075 6 log.go:172] (0xc0018b6960) (1) Data frame sent I0428 10:49:58.859089 6 log.go:172] (0xc000c6efd0) (0xc0018b6960) Stream removed, broadcasting: 1 I0428 10:49:58.859105 6 log.go:172] (0xc000c6efd0) Go away received I0428 10:49:58.859292 6 log.go:172] (0xc000c6efd0) (0xc0018b6960) Stream removed, broadcasting: 1 I0428 10:49:58.859324 6 log.go:172] (0xc000c6efd0) (0xc000a163c0) Stream removed, broadcasting: 3 I0428 10:49:58.859356 6 log.go:172] (0xc000c6efd0) (0xc000a16500) Stream removed, broadcasting: 5 Apr 28 10:49:58.859: INFO: Exec stderr: "" Apr 28 10:49:58.859: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.859: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:58.894302 6 log.go:172] (0xc0000eb760) (0xc0010c2820) Create stream I0428 10:49:58.894344 6 log.go:172] (0xc0000eb760) (0xc0010c2820) Stream added, broadcasting: 1 I0428 10:49:58.896911 6 log.go:172] (0xc0000eb760) Reply frame received for 1 I0428 10:49:58.896966 6 log.go:172] (0xc0000eb760) (0xc000a16640) Create stream I0428 10:49:58.896989 6 log.go:172] (0xc0000eb760) (0xc000a16640) Stream added, broadcasting: 3 I0428 10:49:58.898443 6 log.go:172] 
(0xc0000eb760) Reply frame received for 3 I0428 10:49:58.898479 6 log.go:172] (0xc0000eb760) (0xc0018b6a00) Create stream I0428 10:49:58.898488 6 log.go:172] (0xc0000eb760) (0xc0018b6a00) Stream added, broadcasting: 5 I0428 10:49:58.899687 6 log.go:172] (0xc0000eb760) Reply frame received for 5 I0428 10:49:58.963265 6 log.go:172] (0xc0000eb760) Data frame received for 3 I0428 10:49:58.963297 6 log.go:172] (0xc000a16640) (3) Data frame handling I0428 10:49:58.963305 6 log.go:172] (0xc000a16640) (3) Data frame sent I0428 10:49:58.963310 6 log.go:172] (0xc0000eb760) Data frame received for 3 I0428 10:49:58.963314 6 log.go:172] (0xc000a16640) (3) Data frame handling I0428 10:49:58.963338 6 log.go:172] (0xc0000eb760) Data frame received for 5 I0428 10:49:58.963389 6 log.go:172] (0xc0018b6a00) (5) Data frame handling I0428 10:49:58.965081 6 log.go:172] (0xc0000eb760) Data frame received for 1 I0428 10:49:58.965104 6 log.go:172] (0xc0010c2820) (1) Data frame handling I0428 10:49:58.965232 6 log.go:172] (0xc0010c2820) (1) Data frame sent I0428 10:49:58.965256 6 log.go:172] (0xc0000eb760) (0xc0010c2820) Stream removed, broadcasting: 1 I0428 10:49:58.965281 6 log.go:172] (0xc0000eb760) Go away received I0428 10:49:58.965364 6 log.go:172] (0xc0000eb760) (0xc0010c2820) Stream removed, broadcasting: 1 I0428 10:49:58.965383 6 log.go:172] (0xc0000eb760) (0xc000a16640) Stream removed, broadcasting: 3 I0428 10:49:58.965389 6 log.go:172] (0xc0000eb760) (0xc0018b6a00) Stream removed, broadcasting: 5 Apr 28 10:49:58.965: INFO: Exec stderr: "" Apr 28 10:49:58.965: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-kl9wg PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:49:58.965: INFO: >>> kubeConfig: /root/.kube/config I0428 10:49:59.000921 6 log.go:172] (0xc000c6f4a0) (0xc0018b6dc0) Create stream I0428 10:49:59.000943 6 log.go:172] (0xc000c6f4a0) (0xc0018b6dc0) Stream added, broadcasting: 1 I0428 10:49:59.003729 6 log.go:172] (0xc000c6f4a0) Reply frame received for 1 I0428 10:49:59.003785 6 log.go:172] (0xc000c6f4a0) (0xc0017e2640) Create stream I0428 10:49:59.003808 6 log.go:172] (0xc000c6f4a0) (0xc0017e2640) Stream added, broadcasting: 3 I0428 10:49:59.004849 6 log.go:172] (0xc000c6f4a0) Reply frame received for 3 I0428 10:49:59.004912 6 log.go:172] (0xc000c6f4a0) (0xc000740d20) Create stream I0428 10:49:59.004932 6 log.go:172] (0xc000c6f4a0) (0xc000740d20) Stream added, broadcasting: 5 I0428 10:49:59.006066 6 log.go:172] (0xc000c6f4a0) Reply frame received for 5 I0428 10:49:59.070595 6 log.go:172] (0xc000c6f4a0) Data frame received for 3 I0428 10:49:59.070631 6 log.go:172] (0xc0017e2640) (3) Data frame handling I0428 10:49:59.070639 6 log.go:172] (0xc0017e2640) (3) Data frame sent I0428 10:49:59.070651 6 log.go:172] (0xc000c6f4a0) Data frame received for 3 I0428 10:49:59.070655 6 log.go:172] (0xc0017e2640) (3) Data frame handling I0428 10:49:59.070674 6 log.go:172] (0xc000c6f4a0) Data frame received for 5 I0428 10:49:59.070679 6 log.go:172] (0xc000740d20) (5) Data frame handling I0428 10:49:59.072177 6 log.go:172] (0xc000c6f4a0) Data frame received for 1 I0428 10:49:59.072209 6 log.go:172] (0xc0018b6dc0) (1) Data frame handling I0428 10:49:59.072242 6 log.go:172] (0xc0018b6dc0) (1) Data frame sent I0428 10:49:59.072266 6 log.go:172] (0xc000c6f4a0) (0xc0018b6dc0) Stream removed, broadcasting: 1 I0428 10:49:59.072388 6 log.go:172] (0xc000c6f4a0) Go away received I0428 
10:49:59.072446 6 log.go:172] (0xc000c6f4a0) (0xc0018b6dc0) Stream removed, broadcasting: 1 I0428 10:49:59.072481 6 log.go:172] (0xc000c6f4a0) (0xc0017e2640) Stream removed, broadcasting: 3 I0428 10:49:59.072491 6 log.go:172] (0xc000c6f4a0) (0xc000740d20) Stream removed, broadcasting: 5 Apr 28 10:49:59.072: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:49:59.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-kl9wg" for this suite. Apr 28 10:50:45.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:50:45.141: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-kl9wg, resource: bindings, ignored listing per whitelist Apr 28 10:50:45.213: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-kl9wg deletion completed in 46.137043547s • [SLOW TEST:57.394 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:50:45.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f Apr 28 10:50:45.358: INFO: Pod name my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f: Found 0 pods out of 1 Apr 28 10:50:50.363: INFO: Pod name my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f: Found 1 pods out of 1 Apr 28 10:50:50.363: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f" are running Apr 28 10:50:50.366: INFO: Pod "my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f-x8b5g" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 10:50:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 10:50:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 10:50:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 10:50:45 +0000 UTC Reason: Message:}]) Apr 28 10:50:50.366: INFO: Trying to dial the pod Apr 28 10:50:55.378: INFO: Controller my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f: Got expected result from replica 1 
[my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f-x8b5g]: "my-hostname-basic-1cdbb49c-893e-11ea-80e8-0242ac11000f-x8b5g", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:50:55.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-brht2" for this suite. Apr 28 10:51:01.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:51:01.503: INFO: namespace: e2e-tests-replication-controller-brht2, resource: bindings, ignored listing per whitelist Apr 28 10:51:01.517: INFO: namespace e2e-tests-replication-controller-brht2 deletion completed in 6.134462572s • [SLOW TEST:16.304 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:51:01.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-268e7df9-893e-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 10:51:01.629: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-2sdxj" to be "success or failure" Apr 28 10:51:01.647: INFO: Pod "pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.922423ms Apr 28 10:51:03.718: INFO: Pod "pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088868743s Apr 28 10:51:05.722: INFO: Pod "pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.093035329s STEP: Saw pod success Apr 28 10:51:05.722: INFO: Pod "pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 10:51:05.726: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 10:51:05.744: INFO: Waiting for pod pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f to disappear Apr 28 10:51:05.748: INFO: Pod pod-projected-secrets-2690b191-893e-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:51:05.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2sdxj" for this suite. Apr 28 10:51:11.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:51:11.806: INFO: namespace: e2e-tests-projected-2sdxj, resource: bindings, ignored listing per whitelist Apr 28 10:51:11.862: INFO: namespace e2e-tests-projected-2sdxj deletion completed in 6.110317264s • [SLOW TEST:10.344 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:51:11.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 28 10:51:11.990: INFO: Waiting up to 5m0s for pod "pod-2cbca0b7-893e-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-p7gng" to be "success or failure" Apr 28 10:51:11.994: INFO: Pod "pod-2cbca0b7-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.8694ms Apr 28 10:51:13.998: INFO: Pod "pod-2cbca0b7-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007769971s Apr 28 10:51:16.002: INFO: Pod "pod-2cbca0b7-893e-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012000622s STEP: Saw pod success Apr 28 10:51:16.002: INFO: Pod "pod-2cbca0b7-893e-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 10:51:16.006: INFO: Trying to get logs from node hunter-worker pod pod-2cbca0b7-893e-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 10:51:16.026: INFO: Waiting for pod pod-2cbca0b7-893e-11ea-80e8-0242ac11000f to disappear Apr 28 10:51:16.030: INFO: Pod pod-2cbca0b7-893e-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:51:16.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p7gng" for this suite. Apr 28 10:51:22.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:51:22.081: INFO: namespace: e2e-tests-emptydir-p7gng, resource: bindings, ignored listing per whitelist Apr 28 10:51:22.121: INFO: namespace e2e-tests-emptydir-p7gng deletion completed in 6.087021708s • [SLOW TEST:10.259 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:51:22.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 28 10:51:22.256: INFO: Waiting up to 5m0s for pod "pod-32dbf815-893e-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-sm85f" to be "success or failure" Apr 28 10:51:22.262: INFO: Pod "pod-32dbf815-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.980491ms Apr 28 10:51:24.292: INFO: Pod "pod-32dbf815-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036504909s Apr 28 10:51:26.296: INFO: Pod "pod-32dbf815-893e-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040368138s STEP: Saw pod success Apr 28 10:51:26.296: INFO: Pod "pod-32dbf815-893e-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 10:51:26.299: INFO: Trying to get logs from node hunter-worker pod pod-32dbf815-893e-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 10:51:26.347: INFO: Waiting for pod pod-32dbf815-893e-11ea-80e8-0242ac11000f to disappear Apr 28 10:51:26.370: INFO: Pod pod-32dbf815-893e-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:51:26.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sm85f" for this suite. Apr 28 10:51:32.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:51:32.457: INFO: namespace: e2e-tests-emptydir-sm85f, resource: bindings, ignored listing per whitelist Apr 28 10:51:32.487: INFO: namespace e2e-tests-emptydir-sm85f deletion completed in 6.113698907s • [SLOW TEST:10.366 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:51:32.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Apr 28 10:51:37.134: INFO: Successfully updated pod "labelsupdate39045e15-893e-11ea-80e8-0242ac11000f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:51:39.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-n2pct" for this suite. 
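[Editor's note, not part of the captured log] The downward-api case above only logs "Successfully updated pod", so the mechanism is easy to miss: the pod mounts a downwardAPI volume that projects metadata.labels into a file, and the kubelet rewrites that file when the pod's labels change. The Go sketch below builds that kind of pod object client-side; it is an illustration under assumed names (labels-demo, busybox), not the e2e framework's own fixture.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labels-demo",                       // illustrative name, not the test's generated name
			Labels: map[string]string{"key": "value-1"}, // updating this later is what changes the projected file
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}

	// Print the manifest that would be submitted to the API server.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Relabeling the running pod (for example: kubectl label pod labels-demo key=value-2 --overwrite) is what drives the file update that the test waits for.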
Apr 28 10:52:01.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:52:01.223: INFO: namespace: e2e-tests-downward-api-n2pct, resource: bindings, ignored listing per whitelist Apr 28 10:52:01.267: INFO: namespace e2e-tests-downward-api-n2pct deletion completed in 22.113227644s • [SLOW TEST:28.780 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:52:01.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-jjxd STEP: Creating a pod to test atomic-volume-subpath Apr 28 10:52:01.396: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jjxd" in namespace "e2e-tests-subpath-lz7bx" to be "success or failure" Apr 28 10:52:01.400: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.882061ms Apr 28 10:52:03.405: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008623118s Apr 28 10:52:05.431: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035137866s Apr 28 10:52:07.435: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=true. Elapsed: 6.03869163s Apr 28 10:52:09.439: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 8.043036343s Apr 28 10:52:11.444: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 10.047989991s Apr 28 10:52:13.449: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 12.052997543s Apr 28 10:52:15.453: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 14.056934272s Apr 28 10:52:17.457: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 16.060730841s Apr 28 10:52:19.461: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 18.065019322s Apr 28 10:52:21.465: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 20.068612161s Apr 28 10:52:23.469: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. Elapsed: 22.072958936s Apr 28 10:52:25.473: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.076864579s Apr 28 10:52:27.485: INFO: Pod "pod-subpath-test-secret-jjxd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.089032298s STEP: Saw pod success Apr 28 10:52:27.485: INFO: Pod "pod-subpath-test-secret-jjxd" satisfied condition "success or failure" Apr 28 10:52:27.488: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-jjxd container test-container-subpath-secret-jjxd: STEP: delete the pod Apr 28 10:52:27.528: INFO: Waiting for pod pod-subpath-test-secret-jjxd to disappear Apr 28 10:52:27.541: INFO: Pod pod-subpath-test-secret-jjxd no longer exists STEP: Deleting pod pod-subpath-test-secret-jjxd Apr 28 10:52:27.541: INFO: Deleting pod "pod-subpath-test-secret-jjxd" in namespace "e2e-tests-subpath-lz7bx" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:52:27.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lz7bx" for this suite. Apr 28 10:52:33.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:52:33.592: INFO: namespace: e2e-tests-subpath-lz7bx, resource: bindings, ignored listing per whitelist Apr 28 10:52:33.667: INFO: namespace e2e-tests-subpath-lz7bx deletion completed in 6.120251645s • [SLOW TEST:32.399 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:52:33.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-lbqj STEP: Creating a pod to test atomic-volume-subpath Apr 28 10:52:33.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lbqj" in namespace "e2e-tests-subpath-9b4ws" to be "success or failure" Apr 28 10:52:33.802: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0445ms Apr 28 10:52:35.807: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009189551s Apr 28 10:52:37.833: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.034784389s Apr 28 10:52:39.837: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=true. Elapsed: 6.03930393s Apr 28 10:52:41.842: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 8.043838522s Apr 28 10:52:43.846: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 10.048287046s Apr 28 10:52:45.850: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 12.052360823s Apr 28 10:52:47.855: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 14.056526222s Apr 28 10:52:49.859: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 16.061089457s Apr 28 10:52:51.863: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 18.065187576s Apr 28 10:52:53.868: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 20.069576804s Apr 28 10:52:55.871: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 22.073332232s Apr 28 10:52:57.876: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Running", Reason="", readiness=false. Elapsed: 24.077736573s Apr 28 10:52:59.893: INFO: Pod "pod-subpath-test-configmap-lbqj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.095177693s STEP: Saw pod success Apr 28 10:52:59.893: INFO: Pod "pod-subpath-test-configmap-lbqj" satisfied condition "success or failure" Apr 28 10:52:59.896: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-lbqj container test-container-subpath-configmap-lbqj: STEP: delete the pod Apr 28 10:52:59.932: INFO: Waiting for pod pod-subpath-test-configmap-lbqj to disappear Apr 28 10:52:59.943: INFO: Pod pod-subpath-test-configmap-lbqj no longer exists STEP: Deleting pod pod-subpath-test-configmap-lbqj Apr 28 10:52:59.943: INFO: Deleting pod "pod-subpath-test-configmap-lbqj" in namespace "e2e-tests-subpath-9b4ws" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:52:59.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-9b4ws" for this suite. 
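[Editor's note, not part of the captured log] The subpath case above mounts a single ConfigMap key over a path that already exists in the container image, which is the point of the "mountPath of existing file" variant. Below is a minimal client-side sketch of such a pod; the pod name, ConfigMap name, key, and target path are placeholders, not the values generated by the framework.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-configmap-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/hosts", // a file that already exists in the image
					SubPath:   "hosts",      // a single key from the ConfigMap, mounted over that file
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Assumes a ConfigMap named "subpath-data" with a key "hosts".
						LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}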
Apr 28 10:53:05.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:53:05.984: INFO: namespace: e2e-tests-subpath-9b4ws, resource: bindings, ignored listing per whitelist Apr 28 10:53:06.041: INFO: namespace e2e-tests-subpath-9b4ws deletion completed in 6.090934121s • [SLOW TEST:32.373 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:53:06.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-4vr5w Apr 28 10:53:10.178: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-4vr5w STEP: checking the pod's current state and verifying that restartCount is present Apr 28 10:53:10.181: INFO: Initial restart count of pod liveness-http is 0 Apr 28 10:53:28.250: INFO: Restart count of pod e2e-tests-container-probe-4vr5w/liveness-http is now 1 (18.068654876s elapsed) Apr 28 10:53:48.395: INFO: Restart count of pod e2e-tests-container-probe-4vr5w/liveness-http is now 2 (38.21325309s elapsed) Apr 28 10:54:08.510: INFO: Restart count of pod e2e-tests-container-probe-4vr5w/liveness-http is now 3 (58.328454873s elapsed) Apr 28 10:54:28.578: INFO: Restart count of pod e2e-tests-container-probe-4vr5w/liveness-http is now 4 (1m18.396639089s elapsed) Apr 28 10:55:28.791: INFO: Restart count of pod e2e-tests-container-probe-4vr5w/liveness-http is now 5 (2m18.609668988s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:55:28.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-4vr5w" for this suite. 
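[Editor's note, not part of the captured log] The restart counts above (1 through 5, roughly every 20 seconds) come from an HTTP liveness probe that the liveness-http pod is designed to fail, so the kubelet keeps restarting the container while the test checks that the count only ever increases. A sketch of a pod carrying such a probe follows; the image, port, path, and timings are illustrative assumptions, not the test's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The probe handler is assigned through the embedded handler struct, so the
	// snippet is not tied to a single k8s.io/api version.
	probe := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       3,
		FailureThreshold:    1,
	}
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "k8s.gcr.io/liveness", // illustrative; the e2e test image may differ
				Args:          []string{"/server"},
				LivenessProbe: probe,
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}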
Apr 28 10:55:34.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:55:34.913: INFO: namespace: e2e-tests-container-probe-4vr5w, resource: bindings, ignored listing per whitelist Apr 28 10:55:34.919: INFO: namespace e2e-tests-container-probe-4vr5w deletion completed in 6.082609024s • [SLOW TEST:148.878 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:55:34.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c983e7b5-893e-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 10:55:35.047: INFO: Waiting up to 5m0s for pod "pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-w6g2t" to be "success or failure" Apr 28 10:55:35.076: INFO: Pod "pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.323364ms Apr 28 10:55:37.080: INFO: Pod "pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033465159s Apr 28 10:55:39.084: INFO: Pod "pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036866734s STEP: Saw pod success Apr 28 10:55:39.084: INFO: Pod "pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 10:55:39.086: INFO: Trying to get logs from node hunter-worker pod pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 10:55:39.116: INFO: Waiting for pod pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f to disappear Apr 28 10:55:39.131: INFO: Pod pod-secrets-c98632c1-893e-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:55:39.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-w6g2t" for this suite. 
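[Editor's note, not part of the captured log] The secret-volume case above runs the pod as a non-root user and asserts that the projected files carry the requested mode and group ownership. The sketch below shows where those knobs live on the Pod object: SecretVolumeSource.DefaultMode for the file mode, and the pod-level SecurityContext for RunAsUser and FSGroup. Names, the UID/GID values, and the 0440 mode are placeholders for illustration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root UID
				FSGroup:   int64Ptr(1001), // group ownership applied to the projected files
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-demo", // illustrative name
						DefaultMode: int32Ptr(0440),     // file mode the test inspects
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}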
Apr 28 10:55:45.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:55:45.159: INFO: namespace: e2e-tests-secrets-w6g2t, resource: bindings, ignored listing per whitelist Apr 28 10:55:45.217: INFO: namespace e2e-tests-secrets-w6g2t deletion completed in 6.083652155s • [SLOW TEST:10.299 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:55:45.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nz6mg [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Apr 28 10:55:45.342: INFO: Found 0 stateful pods, waiting for 3 Apr 28 10:55:55.349: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 10:55:55.349: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 10:55:55.349: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 28 10:56:05.347: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 10:56:05.347: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 10:56:05.347: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 28 10:56:05.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nz6mg ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 10:56:05.588: INFO: stderr: "I0428 10:56:05.487012 400 log.go:172] (0xc000162840) (0xc00076e640) Create stream\nI0428 10:56:05.487073 400 log.go:172] (0xc000162840) (0xc00076e640) Stream added, broadcasting: 1\nI0428 10:56:05.489736 400 log.go:172] (0xc000162840) Reply frame received for 1\nI0428 10:56:05.489785 400 log.go:172] (0xc000162840) (0xc000552d20) Create stream\nI0428 10:56:05.489805 400 log.go:172] (0xc000162840) (0xc000552d20) Stream added, broadcasting: 3\nI0428 10:56:05.490822 400 log.go:172] (0xc000162840) Reply frame received for 3\nI0428 10:56:05.490877 400 log.go:172] (0xc000162840) 
(0xc0007be000) Create stream\nI0428 10:56:05.490894 400 log.go:172] (0xc000162840) (0xc0007be000) Stream added, broadcasting: 5\nI0428 10:56:05.491937 400 log.go:172] (0xc000162840) Reply frame received for 5\nI0428 10:56:05.581419 400 log.go:172] (0xc000162840) Data frame received for 5\nI0428 10:56:05.581486 400 log.go:172] (0xc0007be000) (5) Data frame handling\nI0428 10:56:05.581524 400 log.go:172] (0xc000162840) Data frame received for 3\nI0428 10:56:05.581553 400 log.go:172] (0xc000552d20) (3) Data frame handling\nI0428 10:56:05.581588 400 log.go:172] (0xc000552d20) (3) Data frame sent\nI0428 10:56:05.581621 400 log.go:172] (0xc000162840) Data frame received for 3\nI0428 10:56:05.581650 400 log.go:172] (0xc000552d20) (3) Data frame handling\nI0428 10:56:05.583557 400 log.go:172] (0xc000162840) Data frame received for 1\nI0428 10:56:05.583582 400 log.go:172] (0xc00076e640) (1) Data frame handling\nI0428 10:56:05.583608 400 log.go:172] (0xc00076e640) (1) Data frame sent\nI0428 10:56:05.583625 400 log.go:172] (0xc000162840) (0xc00076e640) Stream removed, broadcasting: 1\nI0428 10:56:05.583659 400 log.go:172] (0xc000162840) Go away received\nI0428 10:56:05.583829 400 log.go:172] (0xc000162840) (0xc00076e640) Stream removed, broadcasting: 1\nI0428 10:56:05.583843 400 log.go:172] (0xc000162840) (0xc000552d20) Stream removed, broadcasting: 3\nI0428 10:56:05.583857 400 log.go:172] (0xc000162840) (0xc0007be000) Stream removed, broadcasting: 5\n" Apr 28 10:56:05.588: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 10:56:05.588: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 28 10:56:15.618: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 28 10:56:25.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nz6mg ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 10:56:25.886: INFO: stderr: "I0428 10:56:25.814033 422 log.go:172] (0xc000148630) (0xc000133400) Create stream\nI0428 10:56:25.814117 422 log.go:172] (0xc000148630) (0xc000133400) Stream added, broadcasting: 1\nI0428 10:56:25.816903 422 log.go:172] (0xc000148630) Reply frame received for 1\nI0428 10:56:25.816939 422 log.go:172] (0xc000148630) (0xc0001334a0) Create stream\nI0428 10:56:25.816951 422 log.go:172] (0xc000148630) (0xc0001334a0) Stream added, broadcasting: 3\nI0428 10:56:25.818041 422 log.go:172] (0xc000148630) Reply frame received for 3\nI0428 10:56:25.818119 422 log.go:172] (0xc000148630) (0xc0003a0000) Create stream\nI0428 10:56:25.818143 422 log.go:172] (0xc000148630) (0xc0003a0000) Stream added, broadcasting: 5\nI0428 10:56:25.819029 422 log.go:172] (0xc000148630) Reply frame received for 5\nI0428 10:56:25.881030 422 log.go:172] (0xc000148630) Data frame received for 5\nI0428 10:56:25.881093 422 log.go:172] (0xc0003a0000) (5) Data frame handling\nI0428 10:56:25.881278 422 log.go:172] (0xc000148630) Data frame received for 3\nI0428 10:56:25.881325 422 log.go:172] (0xc0001334a0) (3) Data frame handling\nI0428 10:56:25.881356 422 log.go:172] (0xc0001334a0) (3) Data frame sent\nI0428 10:56:25.881377 422 log.go:172] (0xc000148630) Data frame received for 3\nI0428 10:56:25.881394 422 log.go:172] (0xc0001334a0) (3) Data frame 
handling\nI0428 10:56:25.882529 422 log.go:172] (0xc000148630) Data frame received for 1\nI0428 10:56:25.882545 422 log.go:172] (0xc000133400) (1) Data frame handling\nI0428 10:56:25.882552 422 log.go:172] (0xc000133400) (1) Data frame sent\nI0428 10:56:25.882565 422 log.go:172] (0xc000148630) (0xc000133400) Stream removed, broadcasting: 1\nI0428 10:56:25.882582 422 log.go:172] (0xc000148630) Go away received\nI0428 10:56:25.882902 422 log.go:172] (0xc000148630) (0xc000133400) Stream removed, broadcasting: 1\nI0428 10:56:25.882938 422 log.go:172] (0xc000148630) (0xc0001334a0) Stream removed, broadcasting: 3\nI0428 10:56:25.882958 422 log.go:172] (0xc000148630) (0xc0003a0000) Stream removed, broadcasting: 5\n" Apr 28 10:56:25.886: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 10:56:25.886: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 10:56:45.907: INFO: Waiting for StatefulSet e2e-tests-statefulset-nz6mg/ss2 to complete update Apr 28 10:56:45.907: INFO: Waiting for Pod e2e-tests-statefulset-nz6mg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 28 10:56:55.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nz6mg ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 10:56:56.168: INFO: stderr: "I0428 10:56:56.040463 444 log.go:172] (0xc000166840) (0xc000756640) Create stream\nI0428 10:56:56.040518 444 log.go:172] (0xc000166840) (0xc000756640) Stream added, broadcasting: 1\nI0428 10:56:56.049002 444 log.go:172] (0xc000166840) Reply frame received for 1\nI0428 10:56:56.049041 444 log.go:172] (0xc000166840) (0xc0007566e0) Create stream\nI0428 10:56:56.049050 444 log.go:172] (0xc000166840) (0xc0007566e0) Stream added, broadcasting: 3\nI0428 10:56:56.050122 444 log.go:172] (0xc000166840) Reply frame received for 3\nI0428 10:56:56.050148 444 log.go:172] (0xc000166840) (0xc000654d20) Create stream\nI0428 10:56:56.050159 444 log.go:172] (0xc000166840) (0xc000654d20) Stream added, broadcasting: 5\nI0428 10:56:56.051021 444 log.go:172] (0xc000166840) Reply frame received for 5\nI0428 10:56:56.162270 444 log.go:172] (0xc000166840) Data frame received for 5\nI0428 10:56:56.162305 444 log.go:172] (0xc000654d20) (5) Data frame handling\nI0428 10:56:56.162339 444 log.go:172] (0xc000166840) Data frame received for 3\nI0428 10:56:56.162349 444 log.go:172] (0xc0007566e0) (3) Data frame handling\nI0428 10:56:56.162364 444 log.go:172] (0xc0007566e0) (3) Data frame sent\nI0428 10:56:56.162388 444 log.go:172] (0xc000166840) Data frame received for 3\nI0428 10:56:56.162398 444 log.go:172] (0xc0007566e0) (3) Data frame handling\nI0428 10:56:56.164113 444 log.go:172] (0xc000166840) Data frame received for 1\nI0428 10:56:56.164144 444 log.go:172] (0xc000756640) (1) Data frame handling\nI0428 10:56:56.164190 444 log.go:172] (0xc000756640) (1) Data frame sent\nI0428 10:56:56.164221 444 log.go:172] (0xc000166840) (0xc000756640) Stream removed, broadcasting: 1\nI0428 10:56:56.164260 444 log.go:172] (0xc000166840) Go away received\nI0428 10:56:56.164389 444 log.go:172] (0xc000166840) (0xc000756640) Stream removed, broadcasting: 1\nI0428 10:56:56.164401 444 log.go:172] (0xc000166840) (0xc0007566e0) Stream removed, broadcasting: 3\nI0428 10:56:56.164408 444 log.go:172] (0xc000166840) (0xc000654d20) Stream removed, broadcasting: 5\n" Apr 28 
10:56:56.169: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 10:56:56.169: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 10:57:06.201: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 28 10:57:16.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nz6mg ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 10:57:16.460: INFO: stderr: "I0428 10:57:16.368682 466 log.go:172] (0xc0008662c0) (0xc0007c5360) Create stream\nI0428 10:57:16.368766 466 log.go:172] (0xc0008662c0) (0xc0007c5360) Stream added, broadcasting: 1\nI0428 10:57:16.371387 466 log.go:172] (0xc0008662c0) Reply frame received for 1\nI0428 10:57:16.371446 466 log.go:172] (0xc0008662c0) (0xc000614000) Create stream\nI0428 10:57:16.371464 466 log.go:172] (0xc0008662c0) (0xc000614000) Stream added, broadcasting: 3\nI0428 10:57:16.372368 466 log.go:172] (0xc0008662c0) Reply frame received for 3\nI0428 10:57:16.372390 466 log.go:172] (0xc0008662c0) (0xc0007c5400) Create stream\nI0428 10:57:16.372399 466 log.go:172] (0xc0008662c0) (0xc0007c5400) Stream added, broadcasting: 5\nI0428 10:57:16.373460 466 log.go:172] (0xc0008662c0) Reply frame received for 5\nI0428 10:57:16.455090 466 log.go:172] (0xc0008662c0) Data frame received for 5\nI0428 10:57:16.455138 466 log.go:172] (0xc0008662c0) Data frame received for 3\nI0428 10:57:16.455171 466 log.go:172] (0xc000614000) (3) Data frame handling\nI0428 10:57:16.455194 466 log.go:172] (0xc000614000) (3) Data frame sent\nI0428 10:57:16.455205 466 log.go:172] (0xc0008662c0) Data frame received for 3\nI0428 10:57:16.455215 466 log.go:172] (0xc000614000) (3) Data frame handling\nI0428 10:57:16.455246 466 log.go:172] (0xc0007c5400) (5) Data frame handling\nI0428 10:57:16.456804 466 log.go:172] (0xc0008662c0) Data frame received for 1\nI0428 10:57:16.456826 466 log.go:172] (0xc0007c5360) (1) Data frame handling\nI0428 10:57:16.456860 466 log.go:172] (0xc0007c5360) (1) Data frame sent\nI0428 10:57:16.456882 466 log.go:172] (0xc0008662c0) (0xc0007c5360) Stream removed, broadcasting: 1\nI0428 10:57:16.456955 466 log.go:172] (0xc0008662c0) Go away received\nI0428 10:57:16.457061 466 log.go:172] (0xc0008662c0) (0xc0007c5360) Stream removed, broadcasting: 1\nI0428 10:57:16.457082 466 log.go:172] (0xc0008662c0) (0xc000614000) Stream removed, broadcasting: 3\nI0428 10:57:16.457095 466 log.go:172] (0xc0008662c0) (0xc0007c5400) Stream removed, broadcasting: 5\n" Apr 28 10:57:16.460: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 10:57:16.460: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 10:57:36.477: INFO: Waiting for StatefulSet e2e-tests-statefulset-nz6mg/ss2 to complete update Apr 28 10:57:36.477: INFO: Waiting for Pod e2e-tests-statefulset-nz6mg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Apr 28 10:57:46.486: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nz6mg Apr 28 10:57:46.490: INFO: Scaling statefulset ss2 to 0 Apr 28 10:58:06.517: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 10:58:06.520: INFO: 
Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:58:06.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-nz6mg" for this suite. Apr 28 10:58:12.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:58:12.650: INFO: namespace: e2e-tests-statefulset-nz6mg, resource: bindings, ignored listing per whitelist Apr 28 10:58:12.667: INFO: namespace e2e-tests-statefulset-nz6mg deletion completed in 6.128334045s • [SLOW TEST:147.449 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:58:12.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 10:58:12.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-n8qg4" to be "success or failure" Apr 28 10:58:12.772: INFO: Pod "downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012181ms Apr 28 10:58:14.776: INFO: Pod "downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008010786s Apr 28 10:58:16.780: INFO: Pod "downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012097134s STEP: Saw pod success Apr 28 10:58:16.780: INFO: Pod "downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 10:58:16.783: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 10:58:16.803: INFO: Waiting for pod downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f to disappear Apr 28 10:58:16.849: INFO: Pod downwardapi-volume-2789fd0a-893f-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:58:16.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n8qg4" for this suite. Apr 28 10:58:22.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:58:22.934: INFO: namespace: e2e-tests-projected-n8qg4, resource: bindings, ignored listing per whitelist Apr 28 10:58:22.943: INFO: namespace e2e-tests-projected-n8qg4 deletion completed in 6.089606397s • [SLOW TEST:10.276 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:58:22.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-lwkgq in namespace e2e-tests-proxy-99ctz I0428 10:58:23.106943 6 runners.go:184] Created replication controller with name: proxy-service-lwkgq, namespace: e2e-tests-proxy-99ctz, replica count: 1 I0428 10:58:24.157442 6 runners.go:184] proxy-service-lwkgq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 10:58:25.157649 6 runners.go:184] proxy-service-lwkgq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 10:58:26.157891 6 runners.go:184] proxy-service-lwkgq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 10:58:27.158132 6 runners.go:184] proxy-service-lwkgq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 10:58:28.158349 6 runners.go:184] proxy-service-lwkgq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 10:58:29.158613 6 runners.go:184] proxy-service-lwkgq Pods: 1 out of 1 
created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 10:58:29.162: INFO: setup took 6.136573428s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 28 10:58:29.169: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-99ctz/pods/proxy-service-lwkgq-rqvtj/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0428 10:58:50.085943 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 10:58:50.086: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:58:50.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-c22b8" for this suite. 
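[Editor's note, not part of the captured log] The garbage-collector case above gives half of the pods two owners: the ReplicationController being deleted and the one that stays. Because a valid owner remains, the collector must not cascade-delete those pods. The sketch below shows how such a dual ownerReference looks on a pod's metadata; the UIDs are placeholders, since real ownerReferences must carry the owners' actual UIDs as returned by the API server.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// A pod that lists two ReplicationControllers as owners. Deleting only one of
	// them leaves a valid owner in place, so the garbage collector keeps the pod,
	// which is the behaviour the test above asserts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			OwnerReferences: []metav1.OwnerReference{
				{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "simpletest-rc-to-be-deleted",
					UID:        types.UID("00000000-0000-0000-0000-000000000001"), // placeholder UID
					Controller: boolPtr(true),
				},
				{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "simpletest-rc-to-stay",
					UID:        types.UID("00000000-0000-0000-0000-000000000002"), // placeholder UID
				},
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
		},
	}

	out, _ := json.MarshalIndent(pod.ObjectMeta.OwnerReferences, "", "  ")
	fmt.Println(string(out))
}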
Apr 28 10:58:58.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:58:58.116: INFO: namespace: e2e-tests-gc-c22b8, resource: bindings, ignored listing per whitelist Apr 28 10:58:58.194: INFO: namespace e2e-tests-gc-c22b8 deletion completed in 8.104932812s • [SLOW TEST:19.918 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:58:58.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8dq5c STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 10:58:58.295: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 28 10:59:20.416: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.105:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8dq5c PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:59:20.416: INFO: >>> kubeConfig: /root/.kube/config I0428 10:59:20.452631 6 log.go:172] (0xc000a2a4d0) (0xc0001a6960) Create stream I0428 10:59:20.452669 6 log.go:172] (0xc000a2a4d0) (0xc0001a6960) Stream added, broadcasting: 1 I0428 10:59:20.455444 6 log.go:172] (0xc000a2a4d0) Reply frame received for 1 I0428 10:59:20.455490 6 log.go:172] (0xc000a2a4d0) (0xc001e16e60) Create stream I0428 10:59:20.455511 6 log.go:172] (0xc000a2a4d0) (0xc001e16e60) Stream added, broadcasting: 3 I0428 10:59:20.456573 6 log.go:172] (0xc000a2a4d0) Reply frame received for 3 I0428 10:59:20.456627 6 log.go:172] (0xc000a2a4d0) (0xc000713540) Create stream I0428 10:59:20.456651 6 log.go:172] (0xc000a2a4d0) (0xc000713540) Stream added, broadcasting: 5 I0428 10:59:20.457937 6 log.go:172] (0xc000a2a4d0) Reply frame received for 5 I0428 10:59:20.568327 6 log.go:172] (0xc000a2a4d0) Data frame received for 3 I0428 10:59:20.568356 6 log.go:172] (0xc001e16e60) (3) Data frame handling I0428 10:59:20.568370 6 log.go:172] (0xc001e16e60) (3) Data frame sent I0428 10:59:20.568378 6 log.go:172] (0xc000a2a4d0) Data frame received for 3 I0428 10:59:20.568383 6 log.go:172] (0xc001e16e60) (3) Data frame handling I0428 10:59:20.568447 6 log.go:172] (0xc000a2a4d0) Data frame received for 5 I0428 10:59:20.568486 6 log.go:172] (0xc000713540) (5) Data frame handling I0428 10:59:20.570542 6 
log.go:172] (0xc000a2a4d0) Data frame received for 1 I0428 10:59:20.570583 6 log.go:172] (0xc0001a6960) (1) Data frame handling I0428 10:59:20.570598 6 log.go:172] (0xc0001a6960) (1) Data frame sent I0428 10:59:20.570616 6 log.go:172] (0xc000a2a4d0) (0xc0001a6960) Stream removed, broadcasting: 1 I0428 10:59:20.570636 6 log.go:172] (0xc000a2a4d0) Go away received I0428 10:59:20.570763 6 log.go:172] (0xc000a2a4d0) (0xc0001a6960) Stream removed, broadcasting: 1 I0428 10:59:20.570788 6 log.go:172] (0xc000a2a4d0) (0xc001e16e60) Stream removed, broadcasting: 3 I0428 10:59:20.570803 6 log.go:172] (0xc000a2a4d0) (0xc000713540) Stream removed, broadcasting: 5 Apr 28 10:59:20.570: INFO: Found all expected endpoints: [netserver-0] Apr 28 10:59:20.574: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.230:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8dq5c PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 10:59:20.574: INFO: >>> kubeConfig: /root/.kube/config I0428 10:59:20.607348 6 log.go:172] (0xc0000eaf20) (0xc00045a0a0) Create stream I0428 10:59:20.607393 6 log.go:172] (0xc0000eaf20) (0xc00045a0a0) Stream added, broadcasting: 1 I0428 10:59:20.609763 6 log.go:172] (0xc0000eaf20) Reply frame received for 1 I0428 10:59:20.609819 6 log.go:172] (0xc0000eaf20) (0xc0001a6be0) Create stream I0428 10:59:20.609840 6 log.go:172] (0xc0000eaf20) (0xc0001a6be0) Stream added, broadcasting: 3 I0428 10:59:20.610819 6 log.go:172] (0xc0000eaf20) Reply frame received for 3 I0428 10:59:20.610865 6 log.go:172] (0xc0000eaf20) (0xc0001a6d20) Create stream I0428 10:59:20.610881 6 log.go:172] (0xc0000eaf20) (0xc0001a6d20) Stream added, broadcasting: 5 I0428 10:59:20.611780 6 log.go:172] (0xc0000eaf20) Reply frame received for 5 I0428 10:59:20.681571 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0428 10:59:20.681609 6 log.go:172] (0xc0001a6be0) (3) Data frame handling I0428 10:59:20.681640 6 log.go:172] (0xc0001a6be0) (3) Data frame sent I0428 10:59:20.681658 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0428 10:59:20.681671 6 log.go:172] (0xc0001a6be0) (3) Data frame handling I0428 10:59:20.681839 6 log.go:172] (0xc0000eaf20) Data frame received for 5 I0428 10:59:20.681866 6 log.go:172] (0xc0001a6d20) (5) Data frame handling I0428 10:59:20.683659 6 log.go:172] (0xc0000eaf20) Data frame received for 1 I0428 10:59:20.683684 6 log.go:172] (0xc00045a0a0) (1) Data frame handling I0428 10:59:20.683694 6 log.go:172] (0xc00045a0a0) (1) Data frame sent I0428 10:59:20.683707 6 log.go:172] (0xc0000eaf20) (0xc00045a0a0) Stream removed, broadcasting: 1 I0428 10:59:20.683718 6 log.go:172] (0xc0000eaf20) Go away received I0428 10:59:20.683880 6 log.go:172] (0xc0000eaf20) (0xc00045a0a0) Stream removed, broadcasting: 1 I0428 10:59:20.683929 6 log.go:172] (0xc0000eaf20) (0xc0001a6be0) Stream removed, broadcasting: 3 I0428 10:59:20.683955 6 log.go:172] (0xc0000eaf20) (0xc0001a6d20) Stream removed, broadcasting: 5 Apr 28 10:59:20.683: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 10:59:20.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-8dq5c" for this suite. 
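[Editor's note, not part of the captured log] The networking case above verifies node-to-pod reachability by curling each netserver pod's /hostName endpoint from a host-network helper pod and comparing the returned names against the expected endpoints (netserver-0, netserver-1). The Go sketch below performs the equivalent HTTP check directly; the target IP:port is a placeholder copied from the log, and this is an illustration, not the framework's ExecWithOptions helper.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHostName performs the same probe the test issues via curl: GET the netserver
// pod's /hostName endpoint and return the pod name it reports.
func checkHostName(addr string) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s/hostName", addr))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	// Placeholder pod IP:port; in the log above the targets were
	// 10.244.2.105:8080 and 10.244.1.230:8080.
	name, err := checkHostName("10.244.2.105:8080")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("endpoint reported hostname:", name)
}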
Apr 28 10:59:44.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 10:59:44.742: INFO: namespace: e2e-tests-pod-network-test-8dq5c, resource: bindings, ignored listing per whitelist Apr 28 10:59:44.776: INFO: namespace e2e-tests-pod-network-test-8dq5c deletion completed in 24.087844387s • [SLOW TEST:46.582 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 10:59:44.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 10:59:44.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-52kcr' Apr 28 10:59:47.219: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 10:59:47.219: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 28 10:59:47.231: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 28 10:59:47.241: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 28 10:59:47.311: INFO: scanned /root for discovery docs: Apr 28 10:59:47.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-52kcr' Apr 28 11:00:03.144: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 28 11:00:03.144: INFO: stdout: "Created e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6\nScaling up e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Apr 28 11:00:03.144: INFO: stdout: "Created e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6\nScaling up e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 28 11:00:03.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-52kcr' Apr 28 11:00:03.254: INFO: stderr: "" Apr 28 11:00:03.254: INFO: stdout: "e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6-m68h8 " Apr 28 11:00:03.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6-m68h8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-52kcr' Apr 28 11:00:03.347: INFO: stderr: "" Apr 28 11:00:03.347: INFO: stdout: "true" Apr 28 11:00:03.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6-m68h8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-52kcr' Apr 28 11:00:03.437: INFO: stderr: "" Apr 28 11:00:03.437: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 28 11:00:03.437: INFO: e2e-test-nginx-rc-2e86087b89bd8a84098671dcc4d75de6-m68h8 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Apr 28 11:00:03.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-52kcr' Apr 28 11:00:03.616: INFO: stderr: "" Apr 28 11:00:03.616: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:00:03.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-52kcr" for this suite. Apr 28 11:00:25.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:00:25.662: INFO: namespace: e2e-tests-kubectl-52kcr, resource: bindings, ignored listing per whitelist Apr 28 11:00:25.715: INFO: namespace e2e-tests-kubectl-52kcr deletion completed in 22.089394963s • [SLOW TEST:40.938 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:00:25.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 28 11:00:25.818: INFO: Waiting up to 5m0s for pod "pod-76d96b3b-893f-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-qcs5f" to be "success or failure" Apr 28 11:00:25.839: INFO: Pod "pod-76d96b3b-893f-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.210716ms Apr 28 11:00:27.844: INFO: Pod "pod-76d96b3b-893f-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025735053s Apr 28 11:00:29.848: INFO: Pod "pod-76d96b3b-893f-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029701563s STEP: Saw pod success Apr 28 11:00:29.848: INFO: Pod "pod-76d96b3b-893f-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:00:29.851: INFO: Trying to get logs from node hunter-worker2 pod pod-76d96b3b-893f-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:00:30.120: INFO: Waiting for pod pod-76d96b3b-893f-11ea-80e8-0242ac11000f to disappear Apr 28 11:00:30.134: INFO: Pod pod-76d96b3b-893f-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:00:30.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qcs5f" for this suite. Apr 28 11:00:36.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:00:36.209: INFO: namespace: e2e-tests-emptydir-qcs5f, resource: bindings, ignored listing per whitelist Apr 28 11:00:36.264: INFO: namespace e2e-tests-emptydir-qcs5f deletion completed in 6.125736885s • [SLOW TEST:10.549 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:00:36.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Apr 28 11:00:36.368: INFO: Waiting up to 5m0s for pod "var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f" in namespace "e2e-tests-var-expansion-hlgmg" to be "success or failure" Apr 28 11:00:36.380: INFO: Pod "var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.671518ms Apr 28 11:00:38.384: INFO: Pod "var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01607187s Apr 28 11:00:40.392: INFO: Pod "var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.023582165s Apr 28 11:00:42.396: INFO: Pod "var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028167068s STEP: Saw pod success Apr 28 11:00:42.396: INFO: Pod "var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:00:42.399: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 11:00:42.417: INFO: Waiting for pod var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f to disappear Apr 28 11:00:42.422: INFO: Pod var-expansion-7d1fb7c5-893f-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:00:42.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-hlgmg" for this suite. Apr 28 11:00:48.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:00:48.512: INFO: namespace: e2e-tests-var-expansion-hlgmg, resource: bindings, ignored listing per whitelist Apr 28 11:00:48.512: INFO: namespace e2e-tests-var-expansion-hlgmg deletion completed in 6.087287205s • [SLOW TEST:12.247 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:00:48.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:00:48.615: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:00:52.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5bb5r" for this suite. 
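The websocket case above reads the pod log subresource of the API server; the framework simply performs that request over a websocket upgrade instead of a plain GET. A rough hand-run equivalent, assuming kubectl get --raw is available in the client in use and with placeholder namespace/pod names (the log does not print the test pod's name), is:

    # Ordinary client-side view of the same data.
    kubectl -n <test-namespace> logs <pod-name>

    # Hit the log subresource directly; the e2e test requests this same path
    # over a websocket connection.
    kubectl get --raw "/api/v1/namespaces/<test-namespace>/pods/<pod-name>/log"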
Apr 28 11:01:42.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:01:42.777: INFO: namespace: e2e-tests-pods-5bb5r, resource: bindings, ignored listing per whitelist Apr 28 11:01:42.781: INFO: namespace e2e-tests-pods-5bb5r deletion completed in 50.088159351s • [SLOW TEST:54.269 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:01:42.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:01:42.917: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 7.127812ms) Apr 28 11:01:42.920: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.765012ms) Apr 28 11:01:42.924: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.522076ms) Apr 28 11:01:42.927: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.282128ms) Apr 28 11:01:42.935: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 7.820956ms) Apr 28 11:01:42.939: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.097116ms) Apr 28 11:01:42.942: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.61471ms) Apr 28 11:01:42.944: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.184078ms) Apr 28 11:01:42.946: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.269643ms) Apr 28 11:01:42.949: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.202982ms) Apr 28 11:01:42.951: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.258245ms) Apr 28 11:01:42.954: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.663645ms) Apr 28 11:01:42.956: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.669519ms) Apr 28 11:01:42.959: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.711338ms) Apr 28 11:01:42.962: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.660316ms) Apr 28 11:01:42.965: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.505053ms) Apr 28 11:01:42.968: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.615754ms) Apr 28 11:01:42.970: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.40529ms) Apr 28 11:01:42.973: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.952214ms) Apr 28 11:01:42.976: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.636699ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:01:42.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-mxn9n" for this suite. Apr 28 11:01:49.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:01:49.079: INFO: namespace: e2e-tests-proxy-mxn9n, resource: bindings, ignored listing per whitelist Apr 28 11:01:49.115: INFO: namespace e2e-tests-proxy-mxn9n deletion completed in 6.088839985s • [SLOW TEST:6.334 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:01:49.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2fxfn I0428 11:01:49.197259 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-2fxfn, replica count: 1 I0428 11:01:50.247608 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 11:01:51.247840 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 11:01:52.248102 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 11:01:53.248349 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 11:01:53.378: INFO: Created: latency-svc-nk7jn Apr 28 11:01:53.408: INFO: Got endpoints: latency-svc-nk7jn [59.471506ms] Apr 28 11:01:53.448: INFO: Created: latency-svc-ks8ws Apr 28 11:01:53.460: INFO: Got endpoints: latency-svc-ks8ws [51.809728ms] Apr 28 11:01:53.482: INFO: Created: latency-svc-4kff2 Apr 28 11:01:53.496: INFO: Got endpoints: latency-svc-4kff2 [88.427685ms] Apr 28 11:01:53.541: INFO: Created: latency-svc-f5zz9 Apr 28 11:01:53.579: INFO: Got endpoints: latency-svc-f5zz9 [170.563594ms] Apr 28 11:01:53.624: INFO: Created: latency-svc-92qcx Apr 28 11:01:53.640: INFO: Got endpoints: latency-svc-92qcx [230.868211ms] Apr 28 11:01:53.686: INFO: Created: latency-svc-nrq7t Apr 28 11:01:53.699: INFO: Got endpoints: latency-svc-nrq7t [289.736178ms] Apr 28 11:01:53.741: INFO: Created: latency-svc-4ccfk Apr 28 
11:01:53.754: INFO: Got endpoints: latency-svc-4ccfk [345.586703ms] Apr 28 11:01:53.777: INFO: Created: latency-svc-sxwcq Apr 28 11:01:53.811: INFO: Got endpoints: latency-svc-sxwcq [401.725788ms] Apr 28 11:01:53.821: INFO: Created: latency-svc-ldjtd Apr 28 11:01:53.839: INFO: Got endpoints: latency-svc-ldjtd [430.168067ms] Apr 28 11:01:53.857: INFO: Created: latency-svc-zwlzk Apr 28 11:01:53.875: INFO: Got endpoints: latency-svc-zwlzk [466.490492ms] Apr 28 11:01:53.900: INFO: Created: latency-svc-tfwdg Apr 28 11:01:53.960: INFO: Got endpoints: latency-svc-tfwdg [551.323989ms] Apr 28 11:01:53.963: INFO: Created: latency-svc-d7m4x Apr 28 11:01:53.972: INFO: Got endpoints: latency-svc-d7m4x [562.652148ms] Apr 28 11:01:54.000: INFO: Created: latency-svc-zwff7 Apr 28 11:01:54.008: INFO: Got endpoints: latency-svc-zwff7 [598.723412ms] Apr 28 11:01:54.035: INFO: Created: latency-svc-k96w7 Apr 28 11:01:54.044: INFO: Got endpoints: latency-svc-k96w7 [635.125806ms] Apr 28 11:01:54.099: INFO: Created: latency-svc-7mwpw Apr 28 11:01:54.101: INFO: Got endpoints: latency-svc-7mwpw [692.365549ms] Apr 28 11:01:54.127: INFO: Created: latency-svc-5v9kd Apr 28 11:01:54.145: INFO: Got endpoints: latency-svc-5v9kd [736.160071ms] Apr 28 11:01:54.164: INFO: Created: latency-svc-w86wq Apr 28 11:01:54.177: INFO: Got endpoints: latency-svc-w86wq [717.097383ms] Apr 28 11:01:54.198: INFO: Created: latency-svc-jzndl Apr 28 11:01:54.235: INFO: Got endpoints: latency-svc-jzndl [739.085936ms] Apr 28 11:01:54.251: INFO: Created: latency-svc-9fcvk Apr 28 11:01:54.261: INFO: Got endpoints: latency-svc-9fcvk [682.142503ms] Apr 28 11:01:54.291: INFO: Created: latency-svc-gxtrg Apr 28 11:01:54.326: INFO: Created: latency-svc-d8ddw Apr 28 11:01:54.326: INFO: Got endpoints: latency-svc-gxtrg [686.150441ms] Apr 28 11:01:54.373: INFO: Got endpoints: latency-svc-d8ddw [674.370215ms] Apr 28 11:01:54.380: INFO: Created: latency-svc-9867v Apr 28 11:01:54.394: INFO: Got endpoints: latency-svc-9867v [639.645918ms] Apr 28 11:01:54.420: INFO: Created: latency-svc-nw72s Apr 28 11:01:54.430: INFO: Got endpoints: latency-svc-nw72s [619.368894ms] Apr 28 11:01:54.450: INFO: Created: latency-svc-4bjzq Apr 28 11:01:54.460: INFO: Got endpoints: latency-svc-4bjzq [621.280848ms] Apr 28 11:01:54.518: INFO: Created: latency-svc-kkvkr Apr 28 11:01:54.520: INFO: Got endpoints: latency-svc-kkvkr [644.644596ms] Apr 28 11:01:54.547: INFO: Created: latency-svc-tkfnm Apr 28 11:01:54.563: INFO: Got endpoints: latency-svc-tkfnm [602.512938ms] Apr 28 11:01:54.611: INFO: Created: latency-svc-q9nvp Apr 28 11:01:54.655: INFO: Got endpoints: latency-svc-q9nvp [683.077144ms] Apr 28 11:01:54.671: INFO: Created: latency-svc-lgcvk Apr 28 11:01:54.683: INFO: Got endpoints: latency-svc-lgcvk [675.70424ms] Apr 28 11:01:54.703: INFO: Created: latency-svc-4nn9l Apr 28 11:01:54.720: INFO: Got endpoints: latency-svc-4nn9l [675.991289ms] Apr 28 11:01:54.806: INFO: Created: latency-svc-zkz7p Apr 28 11:01:54.809: INFO: Got endpoints: latency-svc-zkz7p [707.339236ms] Apr 28 11:01:54.863: INFO: Created: latency-svc-spd94 Apr 28 11:01:54.876: INFO: Got endpoints: latency-svc-spd94 [730.826098ms] Apr 28 11:01:54.949: INFO: Created: latency-svc-5lptz Apr 28 11:01:54.970: INFO: Got endpoints: latency-svc-5lptz [793.21404ms] Apr 28 11:01:55.014: INFO: Created: latency-svc-xhqnf Apr 28 11:01:55.116: INFO: Got endpoints: latency-svc-xhqnf [880.363469ms] Apr 28 11:01:55.142: INFO: Created: latency-svc-mwk8s Apr 28 11:01:55.159: INFO: Got endpoints: latency-svc-mwk8s [897.619332ms] Apr 
28 11:01:55.193: INFO: Created: latency-svc-hp9ck Apr 28 11:01:55.207: INFO: Got endpoints: latency-svc-hp9ck [880.830935ms] Apr 28 11:01:55.296: INFO: Created: latency-svc-z6qgx Apr 28 11:01:55.299: INFO: Got endpoints: latency-svc-z6qgx [925.856995ms] Apr 28 11:01:55.321: INFO: Created: latency-svc-8grgl Apr 28 11:01:55.340: INFO: Got endpoints: latency-svc-8grgl [945.62102ms] Apr 28 11:01:55.363: INFO: Created: latency-svc-j4k6n Apr 28 11:01:55.375: INFO: Got endpoints: latency-svc-j4k6n [945.288176ms] Apr 28 11:01:55.440: INFO: Created: latency-svc-sc6n7 Apr 28 11:01:55.463: INFO: Created: latency-svc-7x9m2 Apr 28 11:01:55.463: INFO: Got endpoints: latency-svc-sc6n7 [1.002612368s] Apr 28 11:01:55.472: INFO: Got endpoints: latency-svc-7x9m2 [951.433543ms] Apr 28 11:01:55.493: INFO: Created: latency-svc-8kcpv Apr 28 11:01:55.502: INFO: Got endpoints: latency-svc-8kcpv [939.052048ms] Apr 28 11:01:55.531: INFO: Created: latency-svc-ld9kg Apr 28 11:01:55.595: INFO: Got endpoints: latency-svc-ld9kg [940.027703ms] Apr 28 11:01:55.597: INFO: Created: latency-svc-d7k4w Apr 28 11:01:55.606: INFO: Got endpoints: latency-svc-d7k4w [922.194905ms] Apr 28 11:01:55.643: INFO: Created: latency-svc-d64kr Apr 28 11:01:55.659: INFO: Got endpoints: latency-svc-d64kr [938.807713ms] Apr 28 11:01:55.745: INFO: Created: latency-svc-bmvfx Apr 28 11:01:55.748: INFO: Got endpoints: latency-svc-bmvfx [939.5275ms] Apr 28 11:01:55.772: INFO: Created: latency-svc-s7pq4 Apr 28 11:01:55.786: INFO: Got endpoints: latency-svc-s7pq4 [909.425622ms] Apr 28 11:01:55.808: INFO: Created: latency-svc-bcr5p Apr 28 11:01:55.822: INFO: Got endpoints: latency-svc-bcr5p [851.286161ms] Apr 28 11:01:55.847: INFO: Created: latency-svc-p9vc5 Apr 28 11:01:55.882: INFO: Got endpoints: latency-svc-p9vc5 [766.356415ms] Apr 28 11:01:55.895: INFO: Created: latency-svc-ggw9s Apr 28 11:01:55.912: INFO: Got endpoints: latency-svc-ggw9s [753.321069ms] Apr 28 11:01:55.931: INFO: Created: latency-svc-nwjrb Apr 28 11:01:55.949: INFO: Got endpoints: latency-svc-nwjrb [741.922132ms] Apr 28 11:01:56.033: INFO: Created: latency-svc-n8tjd Apr 28 11:01:56.036: INFO: Got endpoints: latency-svc-n8tjd [736.908967ms] Apr 28 11:01:56.093: INFO: Created: latency-svc-m4p8c Apr 28 11:01:56.111: INFO: Got endpoints: latency-svc-m4p8c [771.342448ms] Apr 28 11:01:57.776: INFO: Created: latency-svc-q86nh Apr 28 11:01:57.859: INFO: Got endpoints: latency-svc-q86nh [2.48324712s] Apr 28 11:01:57.887: INFO: Created: latency-svc-nk8zq Apr 28 11:01:57.916: INFO: Got endpoints: latency-svc-nk8zq [2.45286989s] Apr 28 11:01:57.959: INFO: Created: latency-svc-d72n6 Apr 28 11:01:58.009: INFO: Got endpoints: latency-svc-d72n6 [2.537637428s] Apr 28 11:01:58.040: INFO: Created: latency-svc-dcmdh Apr 28 11:01:58.066: INFO: Got endpoints: latency-svc-dcmdh [2.564049773s] Apr 28 11:01:58.134: INFO: Created: latency-svc-sr4x7 Apr 28 11:01:58.137: INFO: Got endpoints: latency-svc-sr4x7 [2.541587374s] Apr 28 11:01:58.186: INFO: Created: latency-svc-zsdnn Apr 28 11:01:58.203: INFO: Got endpoints: latency-svc-zsdnn [2.596742783s] Apr 28 11:01:58.272: INFO: Created: latency-svc-2lpjz Apr 28 11:01:58.281: INFO: Got endpoints: latency-svc-2lpjz [2.622043527s] Apr 28 11:01:58.305: INFO: Created: latency-svc-bv4nm Apr 28 11:01:58.317: INFO: Got endpoints: latency-svc-bv4nm [2.568465852s] Apr 28 11:01:58.348: INFO: Created: latency-svc-wfswk Apr 28 11:01:58.365: INFO: Got endpoints: latency-svc-wfswk [2.579534361s] Apr 28 11:01:58.427: INFO: Created: latency-svc-8jf6p Apr 28 11:01:58.430: INFO: 
Got endpoints: latency-svc-8jf6p [2.608193445s] Apr 28 11:01:58.465: INFO: Created: latency-svc-7jxh6 Apr 28 11:01:58.480: INFO: Got endpoints: latency-svc-7jxh6 [2.597516981s] Apr 28 11:01:58.502: INFO: Created: latency-svc-pfpg2 Apr 28 11:01:58.510: INFO: Got endpoints: latency-svc-pfpg2 [79.855092ms] Apr 28 11:01:58.571: INFO: Created: latency-svc-5d9jp Apr 28 11:01:58.573: INFO: Got endpoints: latency-svc-5d9jp [2.660723718s] Apr 28 11:01:58.646: INFO: Created: latency-svc-psrtd Apr 28 11:01:58.661: INFO: Got endpoints: latency-svc-psrtd [2.711692827s] Apr 28 11:01:58.703: INFO: Created: latency-svc-mnqfs Apr 28 11:01:58.718: INFO: Got endpoints: latency-svc-mnqfs [2.681567135s] Apr 28 11:01:58.768: INFO: Created: latency-svc-2t4s7 Apr 28 11:01:58.775: INFO: Got endpoints: latency-svc-2t4s7 [2.663975222s] Apr 28 11:01:58.835: INFO: Created: latency-svc-ck5vl Apr 28 11:01:58.838: INFO: Got endpoints: latency-svc-ck5vl [978.812514ms] Apr 28 11:01:58.864: INFO: Created: latency-svc-wvl6x Apr 28 11:01:58.872: INFO: Got endpoints: latency-svc-wvl6x [955.447032ms] Apr 28 11:01:58.892: INFO: Created: latency-svc-7srdx Apr 28 11:01:58.917: INFO: Got endpoints: latency-svc-7srdx [907.965605ms] Apr 28 11:01:58.979: INFO: Created: latency-svc-wcpxd Apr 28 11:01:58.981: INFO: Got endpoints: latency-svc-wcpxd [915.235172ms] Apr 28 11:01:59.014: INFO: Created: latency-svc-vq7xx Apr 28 11:01:59.034: INFO: Got endpoints: latency-svc-vq7xx [897.606572ms] Apr 28 11:01:59.135: INFO: Created: latency-svc-j8692 Apr 28 11:01:59.141: INFO: Got endpoints: latency-svc-j8692 [938.514788ms] Apr 28 11:01:59.218: INFO: Created: latency-svc-hk4dg Apr 28 11:01:59.232: INFO: Got endpoints: latency-svc-hk4dg [951.206316ms] Apr 28 11:01:59.318: INFO: Created: latency-svc-w6cg9 Apr 28 11:01:59.341: INFO: Got endpoints: latency-svc-w6cg9 [1.024406015s] Apr 28 11:01:59.360: INFO: Created: latency-svc-kf6b2 Apr 28 11:01:59.371: INFO: Got endpoints: latency-svc-kf6b2 [1.00564885s] Apr 28 11:01:59.428: INFO: Created: latency-svc-674sm Apr 28 11:01:59.430: INFO: Got endpoints: latency-svc-674sm [949.637173ms] Apr 28 11:01:59.452: INFO: Created: latency-svc-6c2ct Apr 28 11:01:59.467: INFO: Got endpoints: latency-svc-6c2ct [957.413474ms] Apr 28 11:01:59.493: INFO: Created: latency-svc-6f9dt Apr 28 11:01:59.510: INFO: Got endpoints: latency-svc-6f9dt [936.685042ms] Apr 28 11:01:59.571: INFO: Created: latency-svc-grrmw Apr 28 11:01:59.574: INFO: Got endpoints: latency-svc-grrmw [913.787182ms] Apr 28 11:01:59.599: INFO: Created: latency-svc-ntlpt Apr 28 11:01:59.612: INFO: Got endpoints: latency-svc-ntlpt [894.646637ms] Apr 28 11:01:59.632: INFO: Created: latency-svc-pmjxw Apr 28 11:01:59.649: INFO: Got endpoints: latency-svc-pmjxw [873.615499ms] Apr 28 11:01:59.703: INFO: Created: latency-svc-lhpmk Apr 28 11:01:59.708: INFO: Got endpoints: latency-svc-lhpmk [870.148585ms] Apr 28 11:01:59.779: INFO: Created: latency-svc-hbmsf Apr 28 11:01:59.799: INFO: Got endpoints: latency-svc-hbmsf [927.34048ms] Apr 28 11:01:59.859: INFO: Created: latency-svc-6q9cm Apr 28 11:01:59.861: INFO: Got endpoints: latency-svc-6q9cm [943.877877ms] Apr 28 11:01:59.884: INFO: Created: latency-svc-s5dhh Apr 28 11:01:59.902: INFO: Got endpoints: latency-svc-s5dhh [920.086121ms] Apr 28 11:01:59.926: INFO: Created: latency-svc-8564k Apr 28 11:01:59.938: INFO: Got endpoints: latency-svc-8564k [903.280887ms] Apr 28 11:02:00.014: INFO: Created: latency-svc-sp5bp Apr 28 11:02:00.016: INFO: Got endpoints: latency-svc-sp5bp [875.356445ms] Apr 28 11:02:00.043: INFO: 
Created: latency-svc-d7ctj Apr 28 11:02:00.058: INFO: Got endpoints: latency-svc-d7ctj [825.626722ms] Apr 28 11:02:00.082: INFO: Created: latency-svc-z7glj Apr 28 11:02:00.094: INFO: Got endpoints: latency-svc-z7glj [752.962561ms] Apr 28 11:02:00.158: INFO: Created: latency-svc-dkxvh Apr 28 11:02:00.160: INFO: Got endpoints: latency-svc-dkxvh [789.11328ms] Apr 28 11:02:00.188: INFO: Created: latency-svc-txw5w Apr 28 11:02:00.203: INFO: Got endpoints: latency-svc-txw5w [773.021888ms] Apr 28 11:02:00.241: INFO: Created: latency-svc-hwmvz Apr 28 11:02:00.251: INFO: Got endpoints: latency-svc-hwmvz [783.554645ms] Apr 28 11:02:00.290: INFO: Created: latency-svc-mwr6b Apr 28 11:02:00.292: INFO: Got endpoints: latency-svc-mwr6b [782.320812ms] Apr 28 11:02:00.346: INFO: Created: latency-svc-78qrc Apr 28 11:02:00.360: INFO: Got endpoints: latency-svc-78qrc [785.448333ms] Apr 28 11:02:00.445: INFO: Created: latency-svc-ddw7z Apr 28 11:02:00.448: INFO: Got endpoints: latency-svc-ddw7z [836.136381ms] Apr 28 11:02:00.476: INFO: Created: latency-svc-kdrkg Apr 28 11:02:00.532: INFO: Got endpoints: latency-svc-kdrkg [882.777419ms] Apr 28 11:02:00.532: INFO: Created: latency-svc-svxg8 Apr 28 11:02:00.589: INFO: Got endpoints: latency-svc-svxg8 [880.893423ms] Apr 28 11:02:00.607: INFO: Created: latency-svc-98qlb Apr 28 11:02:00.626: INFO: Got endpoints: latency-svc-98qlb [827.114552ms] Apr 28 11:02:00.645: INFO: Created: latency-svc-qm8ww Apr 28 11:02:00.655: INFO: Got endpoints: latency-svc-qm8ww [793.139333ms] Apr 28 11:02:00.682: INFO: Created: latency-svc-9gm4c Apr 28 11:02:00.750: INFO: Got endpoints: latency-svc-9gm4c [848.673271ms] Apr 28 11:02:00.752: INFO: Created: latency-svc-8hrch Apr 28 11:02:00.781: INFO: Got endpoints: latency-svc-8hrch [843.576766ms] Apr 28 11:02:00.811: INFO: Created: latency-svc-qfvb7 Apr 28 11:02:00.842: INFO: Got endpoints: latency-svc-qfvb7 [824.984211ms] Apr 28 11:02:00.907: INFO: Created: latency-svc-54v2k Apr 28 11:02:00.926: INFO: Got endpoints: latency-svc-54v2k [868.009953ms] Apr 28 11:02:00.946: INFO: Created: latency-svc-rkk9s Apr 28 11:02:00.962: INFO: Got endpoints: latency-svc-rkk9s [867.496249ms] Apr 28 11:02:00.982: INFO: Created: latency-svc-84wr9 Apr 28 11:02:01.098: INFO: Got endpoints: latency-svc-84wr9 [937.803702ms] Apr 28 11:02:01.111: INFO: Created: latency-svc-cncgb Apr 28 11:02:01.131: INFO: Got endpoints: latency-svc-cncgb [928.519232ms] Apr 28 11:02:01.162: INFO: Created: latency-svc-k8r6k Apr 28 11:02:01.179: INFO: Got endpoints: latency-svc-k8r6k [928.526236ms] Apr 28 11:02:01.272: INFO: Created: latency-svc-lmp8z Apr 28 11:02:01.288: INFO: Got endpoints: latency-svc-lmp8z [995.477733ms] Apr 28 11:02:01.315: INFO: Created: latency-svc-wcxxp Apr 28 11:02:01.330: INFO: Got endpoints: latency-svc-wcxxp [970.326159ms] Apr 28 11:02:01.364: INFO: Created: latency-svc-cgfdj Apr 28 11:02:01.409: INFO: Got endpoints: latency-svc-cgfdj [960.611299ms] Apr 28 11:02:01.419: INFO: Created: latency-svc-62rdz Apr 28 11:02:01.433: INFO: Got endpoints: latency-svc-62rdz [900.890818ms] Apr 28 11:02:01.455: INFO: Created: latency-svc-zr6gp Apr 28 11:02:01.479: INFO: Got endpoints: latency-svc-zr6gp [890.442582ms] Apr 28 11:02:01.559: INFO: Created: latency-svc-tkf9g Apr 28 11:02:01.562: INFO: Got endpoints: latency-svc-tkf9g [935.752157ms] Apr 28 11:02:01.591: INFO: Created: latency-svc-msfmn Apr 28 11:02:01.607: INFO: Got endpoints: latency-svc-msfmn [952.581067ms] Apr 28 11:02:01.627: INFO: Created: latency-svc-47ld4 Apr 28 11:02:01.643: INFO: Got endpoints: 
latency-svc-47ld4 [892.996421ms] Apr 28 11:02:01.703: INFO: Created: latency-svc-74qvx Apr 28 11:02:01.719: INFO: Got endpoints: latency-svc-74qvx [937.655877ms] Apr 28 11:02:01.755: INFO: Created: latency-svc-8djwt Apr 28 11:02:01.770: INFO: Got endpoints: latency-svc-8djwt [928.43501ms] Apr 28 11:02:01.795: INFO: Created: latency-svc-b995d Apr 28 11:02:01.853: INFO: Got endpoints: latency-svc-b995d [926.767975ms] Apr 28 11:02:01.876: INFO: Created: latency-svc-f8xht Apr 28 11:02:01.890: INFO: Got endpoints: latency-svc-f8xht [928.160696ms] Apr 28 11:02:01.911: INFO: Created: latency-svc-tmcw7 Apr 28 11:02:01.920: INFO: Got endpoints: latency-svc-tmcw7 [822.459562ms] Apr 28 11:02:01.942: INFO: Created: latency-svc-8hjtm Apr 28 11:02:01.951: INFO: Got endpoints: latency-svc-8hjtm [819.853331ms] Apr 28 11:02:02.002: INFO: Created: latency-svc-4wvwk Apr 28 11:02:02.011: INFO: Got endpoints: latency-svc-4wvwk [831.454959ms] Apr 28 11:02:02.031: INFO: Created: latency-svc-t7vr5 Apr 28 11:02:02.041: INFO: Got endpoints: latency-svc-t7vr5 [753.713916ms] Apr 28 11:02:02.067: INFO: Created: latency-svc-pv2s6 Apr 28 11:02:02.084: INFO: Got endpoints: latency-svc-pv2s6 [753.248016ms] Apr 28 11:02:02.140: INFO: Created: latency-svc-4p7rh Apr 28 11:02:02.144: INFO: Got endpoints: latency-svc-4p7rh [734.500091ms] Apr 28 11:02:02.191: INFO: Created: latency-svc-6z6d9 Apr 28 11:02:02.204: INFO: Got endpoints: latency-svc-6z6d9 [771.65238ms] Apr 28 11:02:02.233: INFO: Created: latency-svc-jlnbk Apr 28 11:02:02.272: INFO: Got endpoints: latency-svc-jlnbk [792.818013ms] Apr 28 11:02:02.308: INFO: Created: latency-svc-n6kx2 Apr 28 11:02:02.325: INFO: Got endpoints: latency-svc-n6kx2 [762.974233ms] Apr 28 11:02:02.356: INFO: Created: latency-svc-c5h5v Apr 28 11:02:02.403: INFO: Got endpoints: latency-svc-c5h5v [796.017643ms] Apr 28 11:02:02.431: INFO: Created: latency-svc-wcmcf Apr 28 11:02:02.446: INFO: Got endpoints: latency-svc-wcmcf [802.172313ms] Apr 28 11:02:02.475: INFO: Created: latency-svc-qcb6n Apr 28 11:02:02.488: INFO: Got endpoints: latency-svc-qcb6n [768.598709ms] Apr 28 11:02:02.548: INFO: Created: latency-svc-fhsxd Apr 28 11:02:02.550: INFO: Got endpoints: latency-svc-fhsxd [780.057643ms] Apr 28 11:02:02.602: INFO: Created: latency-svc-lg22z Apr 28 11:02:02.632: INFO: Got endpoints: latency-svc-lg22z [779.189511ms] Apr 28 11:02:02.703: INFO: Created: latency-svc-6ct7b Apr 28 11:02:02.716: INFO: Got endpoints: latency-svc-6ct7b [825.766331ms] Apr 28 11:02:02.743: INFO: Created: latency-svc-jjmmz Apr 28 11:02:02.758: INFO: Got endpoints: latency-svc-jjmmz [837.81725ms] Apr 28 11:02:02.799: INFO: Created: latency-svc-84sx5 Apr 28 11:02:02.865: INFO: Got endpoints: latency-svc-84sx5 [913.8832ms] Apr 28 11:02:02.867: INFO: Created: latency-svc-9hxhz Apr 28 11:02:02.879: INFO: Got endpoints: latency-svc-9hxhz [867.783856ms] Apr 28 11:02:02.905: INFO: Created: latency-svc-ct2m4 Apr 28 11:02:02.921: INFO: Got endpoints: latency-svc-ct2m4 [879.815979ms] Apr 28 11:02:02.941: INFO: Created: latency-svc-ws7h4 Apr 28 11:02:02.958: INFO: Got endpoints: latency-svc-ws7h4 [874.08283ms] Apr 28 11:02:03.011: INFO: Created: latency-svc-7n458 Apr 28 11:02:03.017: INFO: Got endpoints: latency-svc-7n458 [873.66962ms] Apr 28 11:02:03.058: INFO: Created: latency-svc-4kpdh Apr 28 11:02:03.072: INFO: Got endpoints: latency-svc-4kpdh [867.503835ms] Apr 28 11:02:03.158: INFO: Created: latency-svc-vcg68 Apr 28 11:02:03.160: INFO: Got endpoints: latency-svc-vcg68 [888.29004ms] Apr 28 11:02:03.190: INFO: Created: 
latency-svc-xc7mg Apr 28 11:02:03.205: INFO: Got endpoints: latency-svc-xc7mg [879.950155ms] Apr 28 11:02:03.226: INFO: Created: latency-svc-bp7vw Apr 28 11:02:03.234: INFO: Got endpoints: latency-svc-bp7vw [830.907605ms] Apr 28 11:02:03.308: INFO: Created: latency-svc-q6rrv Apr 28 11:02:03.319: INFO: Got endpoints: latency-svc-q6rrv [873.029623ms] Apr 28 11:02:03.349: INFO: Created: latency-svc-s2bfb Apr 28 11:02:03.367: INFO: Got endpoints: latency-svc-s2bfb [879.632612ms] Apr 28 11:02:03.391: INFO: Created: latency-svc-gj52x Apr 28 11:02:03.404: INFO: Got endpoints: latency-svc-gj52x [853.37311ms] Apr 28 11:02:03.452: INFO: Created: latency-svc-lqcsm Apr 28 11:02:03.483: INFO: Created: latency-svc-rtzkm Apr 28 11:02:03.519: INFO: Got endpoints: latency-svc-lqcsm [887.208997ms] Apr 28 11:02:03.520: INFO: Created: latency-svc-kg8v2 Apr 28 11:02:03.530: INFO: Got endpoints: latency-svc-kg8v2 [771.615339ms] Apr 28 11:02:03.589: INFO: Created: latency-svc-dr6kw Apr 28 11:02:03.589: INFO: Got endpoints: latency-svc-rtzkm [873.24475ms] Apr 28 11:02:03.592: INFO: Got endpoints: latency-svc-dr6kw [726.869685ms] Apr 28 11:02:03.645: INFO: Created: latency-svc-8zjcx Apr 28 11:02:03.663: INFO: Got endpoints: latency-svc-8zjcx [783.870233ms] Apr 28 11:02:03.687: INFO: Created: latency-svc-dzpw5 Apr 28 11:02:03.756: INFO: Got endpoints: latency-svc-dzpw5 [835.178364ms] Apr 28 11:02:03.759: INFO: Created: latency-svc-b9pss Apr 28 11:02:03.765: INFO: Got endpoints: latency-svc-b9pss [807.021726ms] Apr 28 11:02:03.797: INFO: Created: latency-svc-tq2jp Apr 28 11:02:03.808: INFO: Got endpoints: latency-svc-tq2jp [790.344203ms] Apr 28 11:02:03.835: INFO: Created: latency-svc-wrw8j Apr 28 11:02:03.850: INFO: Got endpoints: latency-svc-wrw8j [777.704074ms] Apr 28 11:02:03.914: INFO: Created: latency-svc-xr6xj Apr 28 11:02:03.916: INFO: Got endpoints: latency-svc-xr6xj [755.229924ms] Apr 28 11:02:03.951: INFO: Created: latency-svc-554g8 Apr 28 11:02:03.964: INFO: Got endpoints: latency-svc-554g8 [759.137625ms] Apr 28 11:02:03.991: INFO: Created: latency-svc-8hv9z Apr 28 11:02:04.057: INFO: Got endpoints: latency-svc-8hv9z [823.215757ms] Apr 28 11:02:04.069: INFO: Created: latency-svc-gtj4l Apr 28 11:02:04.083: INFO: Got endpoints: latency-svc-gtj4l [764.003224ms] Apr 28 11:02:04.113: INFO: Created: latency-svc-rn7mx Apr 28 11:02:04.131: INFO: Got endpoints: latency-svc-rn7mx [763.895056ms] Apr 28 11:02:04.200: INFO: Created: latency-svc-2s6g8 Apr 28 11:02:04.203: INFO: Got endpoints: latency-svc-2s6g8 [798.965868ms] Apr 28 11:02:04.255: INFO: Created: latency-svc-g7nnd Apr 28 11:02:04.264: INFO: Got endpoints: latency-svc-g7nnd [744.217685ms] Apr 28 11:02:04.285: INFO: Created: latency-svc-dnrbg Apr 28 11:02:04.294: INFO: Got endpoints: latency-svc-dnrbg [763.602606ms] Apr 28 11:02:04.350: INFO: Created: latency-svc-z7jsw Apr 28 11:02:04.352: INFO: Got endpoints: latency-svc-z7jsw [762.753277ms] Apr 28 11:02:04.377: INFO: Created: latency-svc-lzl6l Apr 28 11:02:04.390: INFO: Got endpoints: latency-svc-lzl6l [798.269203ms] Apr 28 11:02:04.414: INFO: Created: latency-svc-f4c4l Apr 28 11:02:04.427: INFO: Got endpoints: latency-svc-f4c4l [763.769752ms] Apr 28 11:02:04.487: INFO: Created: latency-svc-v96hn Apr 28 11:02:04.490: INFO: Got endpoints: latency-svc-v96hn [733.847933ms] Apr 28 11:02:04.531: INFO: Created: latency-svc-k9224 Apr 28 11:02:04.547: INFO: Got endpoints: latency-svc-k9224 [782.451751ms] Apr 28 11:02:04.569: INFO: Created: latency-svc-mkqbd Apr 28 11:02:04.631: INFO: Got endpoints: 
latency-svc-mkqbd [822.903819ms] Apr 28 11:02:04.653: INFO: Created: latency-svc-cp2kt Apr 28 11:02:04.674: INFO: Got endpoints: latency-svc-cp2kt [823.962468ms] Apr 28 11:02:04.705: INFO: Created: latency-svc-fjrkh Apr 28 11:02:04.722: INFO: Got endpoints: latency-svc-fjrkh [806.185122ms] Apr 28 11:02:04.777: INFO: Created: latency-svc-rgdwb Apr 28 11:02:04.809: INFO: Got endpoints: latency-svc-rgdwb [844.357721ms] Apr 28 11:02:04.827: INFO: Created: latency-svc-mjsms Apr 28 11:02:04.918: INFO: Got endpoints: latency-svc-mjsms [860.846818ms] Apr 28 11:02:04.935: INFO: Created: latency-svc-t6hxq Apr 28 11:02:04.944: INFO: Got endpoints: latency-svc-t6hxq [861.670222ms] Apr 28 11:02:04.990: INFO: Created: latency-svc-5gp6f Apr 28 11:02:04.999: INFO: Got endpoints: latency-svc-5gp6f [867.378681ms] Apr 28 11:02:05.085: INFO: Created: latency-svc-xvv7r Apr 28 11:02:05.101: INFO: Got endpoints: latency-svc-xvv7r [898.490508ms] Apr 28 11:02:05.141: INFO: Created: latency-svc-fvxjr Apr 28 11:02:05.172: INFO: Got endpoints: latency-svc-fvxjr [908.473955ms] Apr 28 11:02:05.249: INFO: Created: latency-svc-99sqh Apr 28 11:02:05.275: INFO: Got endpoints: latency-svc-99sqh [981.8687ms] Apr 28 11:02:05.301: INFO: Created: latency-svc-hm676 Apr 28 11:02:05.317: INFO: Got endpoints: latency-svc-hm676 [965.33057ms] Apr 28 11:02:05.404: INFO: Created: latency-svc-cfjfs Apr 28 11:02:05.407: INFO: Got endpoints: latency-svc-cfjfs [1.01680566s] Apr 28 11:02:05.484: INFO: Created: latency-svc-fcbgz Apr 28 11:02:05.500: INFO: Got endpoints: latency-svc-fcbgz [1.073406473s] Apr 28 11:02:05.565: INFO: Created: latency-svc-44hs4 Apr 28 11:02:05.567: INFO: Got endpoints: latency-svc-44hs4 [1.077063102s] Apr 28 11:02:05.613: INFO: Created: latency-svc-4t5hq Apr 28 11:02:05.630: INFO: Got endpoints: latency-svc-4t5hq [1.082961867s] Apr 28 11:02:05.652: INFO: Created: latency-svc-rcnzw Apr 28 11:02:05.714: INFO: Got endpoints: latency-svc-rcnzw [1.083647259s] Apr 28 11:02:05.717: INFO: Created: latency-svc-mg5p4 Apr 28 11:02:05.720: INFO: Got endpoints: latency-svc-mg5p4 [1.046691344s] Apr 28 11:02:05.750: INFO: Created: latency-svc-5c59r Apr 28 11:02:05.763: INFO: Got endpoints: latency-svc-5c59r [1.040995553s] Apr 28 11:02:05.799: INFO: Created: latency-svc-bn4cj Apr 28 11:02:05.864: INFO: Got endpoints: latency-svc-bn4cj [1.055498549s] Apr 28 11:02:05.876: INFO: Created: latency-svc-djz7g Apr 28 11:02:05.890: INFO: Got endpoints: latency-svc-djz7g [971.126578ms] Apr 28 11:02:05.931: INFO: Created: latency-svc-r4lbl Apr 28 11:02:05.938: INFO: Got endpoints: latency-svc-r4lbl [993.403346ms] Apr 28 11:02:05.958: INFO: Created: latency-svc-c6qjv Apr 28 11:02:06.032: INFO: Got endpoints: latency-svc-c6qjv [1.032920379s] Apr 28 11:02:06.033: INFO: Created: latency-svc-zcqx7 Apr 28 11:02:06.041: INFO: Got endpoints: latency-svc-zcqx7 [939.448577ms] Apr 28 11:02:06.090: INFO: Created: latency-svc-dpz8c Apr 28 11:02:06.106: INFO: Got endpoints: latency-svc-dpz8c [933.917627ms] Apr 28 11:02:06.176: INFO: Created: latency-svc-zcdfg Apr 28 11:02:06.179: INFO: Got endpoints: latency-svc-zcdfg [903.802452ms] Apr 28 11:02:06.204: INFO: Created: latency-svc-btj6d Apr 28 11:02:06.233: INFO: Got endpoints: latency-svc-btj6d [915.572329ms] Apr 28 11:02:06.254: INFO: Created: latency-svc-gpjwm Apr 28 11:02:06.269: INFO: Got endpoints: latency-svc-gpjwm [862.123161ms] Apr 28 11:02:06.322: INFO: Created: latency-svc-lq2rt Apr 28 11:02:06.335: INFO: Got endpoints: latency-svc-lq2rt [835.26762ms] Apr 28 11:02:06.380: INFO: Created: 
latency-svc-g7nhl Apr 28 11:02:06.390: INFO: Got endpoints: latency-svc-g7nhl [822.070992ms] Apr 28 11:02:06.458: INFO: Created: latency-svc-4rpw8 Apr 28 11:02:06.462: INFO: Got endpoints: latency-svc-4rpw8 [832.019098ms] Apr 28 11:02:06.462: INFO: Latencies: [51.809728ms 79.855092ms 88.427685ms 170.563594ms 230.868211ms 289.736178ms 345.586703ms 401.725788ms 430.168067ms 466.490492ms 551.323989ms 562.652148ms 598.723412ms 602.512938ms 619.368894ms 621.280848ms 635.125806ms 639.645918ms 644.644596ms 674.370215ms 675.70424ms 675.991289ms 682.142503ms 683.077144ms 686.150441ms 692.365549ms 707.339236ms 717.097383ms 726.869685ms 730.826098ms 733.847933ms 734.500091ms 736.160071ms 736.908967ms 739.085936ms 741.922132ms 744.217685ms 752.962561ms 753.248016ms 753.321069ms 753.713916ms 755.229924ms 759.137625ms 762.753277ms 762.974233ms 763.602606ms 763.769752ms 763.895056ms 764.003224ms 766.356415ms 768.598709ms 771.342448ms 771.615339ms 771.65238ms 773.021888ms 777.704074ms 779.189511ms 780.057643ms 782.320812ms 782.451751ms 783.554645ms 783.870233ms 785.448333ms 789.11328ms 790.344203ms 792.818013ms 793.139333ms 793.21404ms 796.017643ms 798.269203ms 798.965868ms 802.172313ms 806.185122ms 807.021726ms 819.853331ms 822.070992ms 822.459562ms 822.903819ms 823.215757ms 823.962468ms 824.984211ms 825.626722ms 825.766331ms 827.114552ms 830.907605ms 831.454959ms 832.019098ms 835.178364ms 835.26762ms 836.136381ms 837.81725ms 843.576766ms 844.357721ms 848.673271ms 851.286161ms 853.37311ms 860.846818ms 861.670222ms 862.123161ms 867.378681ms 867.496249ms 867.503835ms 867.783856ms 868.009953ms 870.148585ms 873.029623ms 873.24475ms 873.615499ms 873.66962ms 874.08283ms 875.356445ms 879.632612ms 879.815979ms 879.950155ms 880.363469ms 880.830935ms 880.893423ms 882.777419ms 887.208997ms 888.29004ms 890.442582ms 892.996421ms 894.646637ms 897.606572ms 897.619332ms 898.490508ms 900.890818ms 903.280887ms 903.802452ms 907.965605ms 908.473955ms 909.425622ms 913.787182ms 913.8832ms 915.235172ms 915.572329ms 920.086121ms 922.194905ms 925.856995ms 926.767975ms 927.34048ms 928.160696ms 928.43501ms 928.519232ms 928.526236ms 933.917627ms 935.752157ms 936.685042ms 937.655877ms 937.803702ms 938.514788ms 938.807713ms 939.052048ms 939.448577ms 939.5275ms 940.027703ms 943.877877ms 945.288176ms 945.62102ms 949.637173ms 951.206316ms 951.433543ms 952.581067ms 955.447032ms 957.413474ms 960.611299ms 965.33057ms 970.326159ms 971.126578ms 978.812514ms 981.8687ms 993.403346ms 995.477733ms 1.002612368s 1.00564885s 1.01680566s 1.024406015s 1.032920379s 1.040995553s 1.046691344s 1.055498549s 1.073406473s 1.077063102s 1.082961867s 1.083647259s 2.45286989s 2.48324712s 2.537637428s 2.541587374s 2.564049773s 2.568465852s 2.579534361s 2.596742783s 2.597516981s 2.608193445s 2.622043527s 2.660723718s 2.663975222s 2.681567135s 2.711692827s] Apr 28 11:02:06.463: INFO: 50 %ile: 867.496249ms Apr 28 11:02:06.463: INFO: 90 %ile: 1.055498549s Apr 28 11:02:06.463: INFO: 99 %ile: 2.681567135s Apr 28 11:02:06.463: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:02:06.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-2fxfn" for this suite. 
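Each latency sample above is the time from creating a Service selecting the svc-latency-rc pods to seeing its Endpoints object populated. A hand-rolled version of one sample is sketched below; it assumes GNU date, a placeholder namespace, and illustrative port values rather than the framework's exact Service spec:

    ns=<test-namespace>
    start=$(date +%s%N)   # GNU date, nanoseconds
    # Expose the test RC behind a throwaway Service, then poll until endpoints appear.
    kubectl -n "$ns" expose rc svc-latency-rc --name=latency-probe --port=80 --target-port=8080
    until [ -n "$(kubectl -n "$ns" get endpoints latency-probe \
          -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
      sleep 0.05
    done
    echo "endpoints ready after $(( ( $(date +%s%N) - start ) / 1000000 )) ms"
    kubectl -n "$ns" delete service latency-probe

The test repeats this for 200 services and reports the 50th/90th/99th percentiles, which is what the "%ile" lines above summarize.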
Apr 28 11:02:28.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:02:28.553: INFO: namespace: e2e-tests-svc-latency-2fxfn, resource: bindings, ignored listing per whitelist Apr 28 11:02:28.575: INFO: namespace e2e-tests-svc-latency-2fxfn deletion completed in 22.107622204s • [SLOW TEST:39.460 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:02:28.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rzgwc STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 11:02:28.652: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 28 11:02:54.813: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.235:8080/dial?request=hostName&protocol=udp&host=10.244.2.110&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-rzgwc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 11:02:54.813: INFO: >>> kubeConfig: /root/.kube/config I0428 11:02:54.841388 6 log.go:172] (0xc0000eae70) (0xc0020921e0) Create stream I0428 11:02:54.841433 6 log.go:172] (0xc0000eae70) (0xc0020921e0) Stream added, broadcasting: 1 I0428 11:02:54.843676 6 log.go:172] (0xc0000eae70) Reply frame received for 1 I0428 11:02:54.843721 6 log.go:172] (0xc0000eae70) (0xc001c400a0) Create stream I0428 11:02:54.843733 6 log.go:172] (0xc0000eae70) (0xc001c400a0) Stream added, broadcasting: 3 I0428 11:02:54.844767 6 log.go:172] (0xc0000eae70) Reply frame received for 3 I0428 11:02:54.844804 6 log.go:172] (0xc0000eae70) (0xc002092280) Create stream I0428 11:02:54.844815 6 log.go:172] (0xc0000eae70) (0xc002092280) Stream added, broadcasting: 5 I0428 11:02:54.846002 6 log.go:172] (0xc0000eae70) Reply frame received for 5 I0428 11:02:54.934563 6 log.go:172] (0xc0000eae70) Data frame received for 3 I0428 11:02:54.934593 6 log.go:172] (0xc001c400a0) (3) Data frame handling I0428 11:02:54.934622 6 log.go:172] (0xc001c400a0) (3) Data frame sent I0428 11:02:54.935433 6 log.go:172] (0xc0000eae70) Data frame received for 3 I0428 11:02:54.935462 6 log.go:172] (0xc001c400a0) (3) Data frame handling I0428 11:02:54.935492 6 log.go:172] (0xc0000eae70) Data frame received for 5 I0428 11:02:54.935516 6 log.go:172] (0xc002092280) (5) Data frame handling I0428 11:02:54.937640 6 log.go:172] (0xc0000eae70) Data frame received for 1 
I0428 11:02:54.937665 6 log.go:172] (0xc0020921e0) (1) Data frame handling I0428 11:02:54.937688 6 log.go:172] (0xc0020921e0) (1) Data frame sent I0428 11:02:54.937707 6 log.go:172] (0xc0000eae70) (0xc0020921e0) Stream removed, broadcasting: 1 I0428 11:02:54.937726 6 log.go:172] (0xc0000eae70) Go away received I0428 11:02:54.937860 6 log.go:172] (0xc0000eae70) (0xc0020921e0) Stream removed, broadcasting: 1 I0428 11:02:54.937892 6 log.go:172] (0xc0000eae70) (0xc001c400a0) Stream removed, broadcasting: 3 I0428 11:02:54.937912 6 log.go:172] (0xc0000eae70) (0xc002092280) Stream removed, broadcasting: 5 Apr 28 11:02:54.937: INFO: Waiting for endpoints: map[] Apr 28 11:02:54.941: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.235:8080/dial?request=hostName&protocol=udp&host=10.244.1.234&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-rzgwc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 11:02:54.941: INFO: >>> kubeConfig: /root/.kube/config I0428 11:02:54.968844 6 log.go:172] (0xc0000eb340) (0xc0020926e0) Create stream I0428 11:02:54.968868 6 log.go:172] (0xc0000eb340) (0xc0020926e0) Stream added, broadcasting: 1 I0428 11:02:54.970600 6 log.go:172] (0xc0000eb340) Reply frame received for 1 I0428 11:02:54.970629 6 log.go:172] (0xc0000eb340) (0xc001d74aa0) Create stream I0428 11:02:54.970639 6 log.go:172] (0xc0000eb340) (0xc001d74aa0) Stream added, broadcasting: 3 I0428 11:02:54.971355 6 log.go:172] (0xc0000eb340) Reply frame received for 3 I0428 11:02:54.971377 6 log.go:172] (0xc0000eb340) (0xc001c40140) Create stream I0428 11:02:54.971384 6 log.go:172] (0xc0000eb340) (0xc001c40140) Stream added, broadcasting: 5 I0428 11:02:54.971984 6 log.go:172] (0xc0000eb340) Reply frame received for 5 I0428 11:02:55.023927 6 log.go:172] (0xc0000eb340) Data frame received for 3 I0428 11:02:55.023954 6 log.go:172] (0xc001d74aa0) (3) Data frame handling I0428 11:02:55.023971 6 log.go:172] (0xc001d74aa0) (3) Data frame sent I0428 11:02:55.024906 6 log.go:172] (0xc0000eb340) Data frame received for 5 I0428 11:02:55.024971 6 log.go:172] (0xc001c40140) (5) Data frame handling I0428 11:02:55.025007 6 log.go:172] (0xc0000eb340) Data frame received for 3 I0428 11:02:55.025022 6 log.go:172] (0xc001d74aa0) (3) Data frame handling I0428 11:02:55.026937 6 log.go:172] (0xc0000eb340) Data frame received for 1 I0428 11:02:55.026980 6 log.go:172] (0xc0020926e0) (1) Data frame handling I0428 11:02:55.027012 6 log.go:172] (0xc0020926e0) (1) Data frame sent I0428 11:02:55.027032 6 log.go:172] (0xc0000eb340) (0xc0020926e0) Stream removed, broadcasting: 1 I0428 11:02:55.027096 6 log.go:172] (0xc0000eb340) Go away received I0428 11:02:55.027195 6 log.go:172] (0xc0000eb340) (0xc0020926e0) Stream removed, broadcasting: 1 I0428 11:02:55.027232 6 log.go:172] (0xc0000eb340) (0xc001d74aa0) Stream removed, broadcasting: 3 I0428 11:02:55.027250 6 log.go:172] (0xc0000eb340) (0xc001c40140) Stream removed, broadcasting: 5 Apr 28 11:02:55.027: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:02:55.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-rzgwc" for this suite. 
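The UDP check above is driven through the netserver's /dial helper: the host-network exec pod asks one webserver pod to send a UDP probe to another and report the hostnames it received. The probe from the log can be replayed as below, with the namespace and pod IPs as placeholders (10.244.1.235 and 10.244.2.110 / 10.244.1.234 were the actual addresses in this run):

    kubectl -n <test-namespace> exec host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s 'http://<webserver-pod-ip>:8080/dial?request=hostName&protocol=udp&host=<target-pod-ip>&port=8081&tries=1'"

The framework logs the responders it is still waiting for, so the empty "Waiting for endpoints: map[]" entries above indicate that every expected reply had already been collected.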
Apr 28 11:03:17.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:03:17.111: INFO: namespace: e2e-tests-pod-network-test-rzgwc, resource: bindings, ignored listing per whitelist Apr 28 11:03:17.117: INFO: namespace e2e-tests-pod-network-test-rzgwc deletion completed in 22.086667662s • [SLOW TEST:48.541 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:03:17.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 28 11:03:25.372: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 11:03:25.380: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 11:03:27.380: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 11:03:27.384: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 11:03:29.380: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 11:03:29.384: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 11:03:31.380: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 11:03:31.384: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 11:03:33.380: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 11:03:33.384: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:03:33.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7p9dq" for this suite. 
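For context on the hook test above: a handler container is created first, then a pod whose container declares a postStart httpGet hook pointing at that handler, so the kubelet must complete the GET before the hook is considered done. A sketch of such a pod spec is below, built with k8s.io/api types; the handler address and path are illustrative (the suite's real values are not shown in this log), the image is the one used elsewhere in this run, and the LifecycleHandler type assumes a recent k8s.io/api release.

// Illustrative pod with a postStart HTTP hook, serialized for inspection.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "docker.io/library/nginx:1.14-alpine",
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // hypothetical handler path
							Host: "10.244.2.100",        // hypothetical handler pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}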
Apr 28 11:03:55.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:03:55.446: INFO: namespace: e2e-tests-container-lifecycle-hook-7p9dq, resource: bindings, ignored listing per whitelist Apr 28 11:03:55.521: INFO: namespace e2e-tests-container-lifecycle-hook-7p9dq deletion completed in 22.13350329s • [SLOW TEST:38.404 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:03:55.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 28 11:04:00.742: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:04:01.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-j72jv" for this suite. 
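The adoption/release behaviour checked above hinges only on the label selector: a bare pod carrying name=pod-adoption-release is adopted (gains an ownerReference) by a ReplicaSet whose selector matches that label, and re-labelling the pod makes the controller release it again. A sketch of such a matching ReplicaSet is below; it is illustrative rather than the suite's exact fixture, and the image is a stand-in.

// Illustrative ReplicaSet whose selector matches the orphan pod's label.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			// Must match the pre-existing pod's label for adoption to happen.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption-release",
						Image: "docker.io/library/nginx:1.14-alpine", // stand-in image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}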
Apr 28 11:04:23.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:04:23.946: INFO: namespace: e2e-tests-replicaset-j72jv, resource: bindings, ignored listing per whitelist Apr 28 11:04:23.967: INFO: namespace e2e-tests-replicaset-j72jv deletion completed in 22.202264394s • [SLOW TEST:28.446 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:04:23.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nkzfc [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-nkzfc STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-nkzfc STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-nkzfc STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-nkzfc Apr 28 11:04:28.209: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nkzfc, name: ss-0, uid: 06033d73-8940-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Apr 28 11:04:31.246: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nkzfc, name: ss-0, uid: 06033d73-8940-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Apr 28 11:04:31.275: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nkzfc, name: ss-0, uid: 06033d73-8940-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 28 11:04:31.286: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-nkzfc STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-nkzfc STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-nkzfc and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Apr 28 11:04:41.377: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nkzfc Apr 28 11:04:41.380: INFO: Scaling statefulset ss to 0 Apr 28 11:04:51.400: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 11:04:51.403: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:04:51.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-nkzfc" for this suite. Apr 28 11:04:57.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:04:57.444: INFO: namespace: e2e-tests-statefulset-nkzfc, resource: bindings, ignored listing per whitelist Apr 28 11:04:57.496: INFO: namespace e2e-tests-statefulset-nkzfc deletion completed in 6.077664802s • [SLOW TEST:33.529 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:04:57.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-2z4hj STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2z4hj to expose endpoints map[] Apr 28 11:04:57.648: INFO: Get endpoints failed (13.342172ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 28 11:04:58.652: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2z4hj exposes endpoints map[] (1.017382601s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-2z4hj STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2z4hj to expose endpoints map[pod1:[80]] Apr 28 11:05:02.731: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2z4hj exposes endpoints map[pod1:[80]] 
(4.071767759s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-2z4hj STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2z4hj to expose endpoints map[pod1:[80] pod2:[80]] Apr 28 11:05:05.802: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2z4hj exposes endpoints map[pod1:[80] pod2:[80]] (3.067253026s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-2z4hj STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2z4hj to expose endpoints map[pod2:[80]] Apr 28 11:05:06.864: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2z4hj exposes endpoints map[pod2:[80]] (1.056216628s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-2z4hj STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2z4hj to expose endpoints map[] Apr 28 11:05:07.910: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2z4hj exposes endpoints map[] (1.042591468s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:05:07.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2z4hj" for this suite. Apr 28 11:05:13.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:05:13.979: INFO: namespace: e2e-tests-services-2z4hj, resource: bindings, ignored listing per whitelist Apr 28 11:05:14.017: INFO: namespace e2e-tests-services-2z4hj deletion completed in 6.076737704s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:16.520 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:05:14.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:05:14.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-ndhh6" to be "success or failure" Apr 28 11:05:14.121: INFO: Pod "downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.511128ms Apr 28 11:05:16.124: INFO: Pod "downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007007234s Apr 28 11:05:18.128: INFO: Pod "downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011279685s STEP: Saw pod success Apr 28 11:05:18.128: INFO: Pod "downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:05:18.132: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:05:18.147: INFO: Waiting for pod downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:05:18.151: INFO: Pod downwardapi-volume-22af181a-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:05:18.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ndhh6" for this suite. Apr 28 11:05:24.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:05:24.212: INFO: namespace: e2e-tests-downward-api-ndhh6, resource: bindings, ignored listing per whitelist Apr 28 11:05:24.257: INFO: namespace e2e-tests-downward-api-ndhh6 deletion completed in 6.10270129s • [SLOW TEST:10.240 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:05:24.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:05:24.362: INFO: Creating deployment "test-recreate-deployment" Apr 28 11:05:24.379: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 28 11:05:24.388: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Apr 28 11:05:26.395: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 28 11:05:26.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723668724, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723668724, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723668724, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723668724, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:05:28.403: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 28 11:05:28.411: INFO: Updating deployment test-recreate-deployment Apr 28 11:05:28.411: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Apr 28 11:05:28.647: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-fvq2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fvq2k/deployments/test-recreate-deployment,UID:28ccad6a-8940-11ea-99e8-0242ac110002,ResourceVersion:7635484,Generation:2,CreationTimestamp:2020-04-28 11:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-28 11:05:28 +0000 UTC 2020-04-28 11:05:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-28 11:05:28 +0000 UTC 2020-04-28 11:05:24 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 28 11:05:28.650: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-fvq2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fvq2k/replicasets/test-recreate-deployment-589c4bfd,UID:2b464d72-8940-11ea-99e8-0242ac110002,ResourceVersion:7635481,Generation:1,CreationTimestamp:2020-04-28 11:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 28ccad6a-8940-11ea-99e8-0242ac110002 0xc00201992f 0xc002019940}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 11:05:28.650: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 28 11:05:28.651: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-fvq2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fvq2k/replicasets/test-recreate-deployment-5bf7f65dc,UID:28d0440c-8940-11ea-99e8-0242ac110002,ResourceVersion:7635472,Generation:2,CreationTimestamp:2020-04-28 11:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 28ccad6a-8940-11ea-99e8-0242ac110002 0xc002019a00 0xc002019a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 11:05:28.653: INFO: Pod "test-recreate-deployment-589c4bfd-9ztmt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-9ztmt,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-fvq2k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fvq2k/pods/test-recreate-deployment-589c4bfd-9ztmt,UID:2b486156-8940-11ea-99e8-0242ac110002,ResourceVersion:7635485,Generation:0,CreationTimestamp:2020-04-28 11:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 2b464d72-8940-11ea-99e8-0242ac110002 0xc001358b6f 0xc001358b80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wwxrt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwxrt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwxrt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001358c90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001358cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:05:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:05:28 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-04-28 11:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:05:28.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-fvq2k" for this suite. Apr 28 11:05:34.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:05:34.740: INFO: namespace: e2e-tests-deployment-fvq2k, resource: bindings, ignored listing per whitelist Apr 28 11:05:34.778: INFO: namespace e2e-tests-deployment-fvq2k deletion completed in 6.120762937s • [SLOW TEST:10.521 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:05:34.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:05:34.890: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:05:39.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-v4k9g" for this suite. 
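The pod test above submits a pod and then runs a command in it through the API server's exec subresource, dialled over a websocket. For comparison, the sketch below drives the same subresource through the more common client-go path, using the SPDY executor rather than a raw websocket; the namespace, pod and container names are placeholders, the kubeconfig path is the one this run uses, and a recent client-go is assumed.

// Sketch: exec a command in a pod via the exec subresource (SPDY executor).
package main

import (
	"bytes"
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the exec request for a hypothetical pod "pod-exec-websocket-test".
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").
		Name("pod-exec-websocket-test").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"/bin/sh", "-c", "hostname"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.StreamWithContext(context.Background(), remotecommand.StreamOptions{
		Stdout: &stdout,
		Stderr: &stderr,
	}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}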
Apr 28 11:06:17.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:06:17.146: INFO: namespace: e2e-tests-pods-v4k9g, resource: bindings, ignored listing per whitelist Apr 28 11:06:17.165: INFO: namespace e2e-tests-pods-v4k9g deletion completed in 38.097251175s • [SLOW TEST:42.387 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:06:17.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 11:06:17.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8rz7k' Apr 28 11:06:17.380: INFO: stderr: "" Apr 28 11:06:17.380: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 28 11:06:22.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8rz7k -o json' Apr 28 11:06:22.537: INFO: stderr: "" Apr 28 11:06:22.537: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-28T11:06:17Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-8rz7k\",\n \"resourceVersion\": \"7635644\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-8rz7k/pods/e2e-test-nginx-pod\",\n \"uid\": \"4864c16b-8940-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-m49dw\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-m49dw\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-m49dw\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T11:06:17Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T11:06:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T11:06:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T11:06:17Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://485a2b0334fe183f23c1131776490b379958db2a60ada27361b0f6e63badeaf2\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-28T11:06:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.242\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-28T11:06:17Z\"\n }\n}\n" STEP: replace the image in the pod Apr 28 11:06:22.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-8rz7k' Apr 28 11:06:22.787: INFO: stderr: "" Apr 28 11:06:22.787: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Apr 28 11:06:22.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8rz7k' Apr 28 11:06:31.266: INFO: stderr: "" Apr 28 11:06:31.266: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:06:31.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8rz7k" for this suite. 
Apr 28 11:06:37.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:06:37.304: INFO: namespace: e2e-tests-kubectl-8rz7k, resource: bindings, ignored listing per whitelist Apr 28 11:06:37.362: INFO: namespace e2e-tests-kubectl-8rz7k deletion completed in 6.092826966s • [SLOW TEST:20.197 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:06:37.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-9xqr STEP: Creating a pod to test atomic-volume-subpath Apr 28 11:06:37.469: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9xqr" in namespace "e2e-tests-subpath-r5mx6" to be "success or failure" Apr 28 11:06:37.482: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Pending", Reason="", readiness=false. Elapsed: 13.216622ms Apr 28 11:06:39.487: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017548141s Apr 28 11:06:41.490: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021300849s Apr 28 11:06:43.509: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 6.040263764s Apr 28 11:06:45.514: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 8.044652376s Apr 28 11:06:47.517: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 10.048380812s Apr 28 11:06:49.522: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 12.052790182s Apr 28 11:06:51.526: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 14.057224403s Apr 28 11:06:53.531: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 16.062143896s Apr 28 11:06:55.535: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 18.066317147s Apr 28 11:06:57.538: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.069436395s Apr 28 11:06:59.543: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 22.073553925s Apr 28 11:07:01.547: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Running", Reason="", readiness=false. Elapsed: 24.07796523s Apr 28 11:07:03.551: INFO: Pod "pod-subpath-test-downwardapi-9xqr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.0824171s STEP: Saw pod success Apr 28 11:07:03.551: INFO: Pod "pod-subpath-test-downwardapi-9xqr" satisfied condition "success or failure" Apr 28 11:07:03.555: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-9xqr container test-container-subpath-downwardapi-9xqr: STEP: delete the pod Apr 28 11:07:03.586: INFO: Waiting for pod pod-subpath-test-downwardapi-9xqr to disappear Apr 28 11:07:03.602: INFO: Pod pod-subpath-test-downwardapi-9xqr no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-9xqr Apr 28 11:07:03.602: INFO: Deleting pod "pod-subpath-test-downwardapi-9xqr" in namespace "e2e-tests-subpath-r5mx6" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:07:03.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-r5mx6" for this suite. Apr 28 11:07:09.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:07:09.629: INFO: namespace: e2e-tests-subpath-r5mx6, resource: bindings, ignored listing per whitelist Apr 28 11:07:09.693: INFO: namespace e2e-tests-subpath-r5mx6 deletion completed in 6.085298243s • [SLOW TEST:32.330 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:07:09.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-znxdk STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 11:07:09.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 28 11:07:38.002: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.243 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-znxdk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 11:07:38.002: 
INFO: >>> kubeConfig: /root/.kube/config I0428 11:07:38.036622 6 log.go:172] (0xc000c6e4d0) (0xc001bc46e0) Create stream I0428 11:07:38.036657 6 log.go:172] (0xc000c6e4d0) (0xc001bc46e0) Stream added, broadcasting: 1 I0428 11:07:38.039229 6 log.go:172] (0xc000c6e4d0) Reply frame received for 1 I0428 11:07:38.039268 6 log.go:172] (0xc000c6e4d0) (0xc001f7c000) Create stream I0428 11:07:38.039282 6 log.go:172] (0xc000c6e4d0) (0xc001f7c000) Stream added, broadcasting: 3 I0428 11:07:38.040384 6 log.go:172] (0xc000c6e4d0) Reply frame received for 3 I0428 11:07:38.040442 6 log.go:172] (0xc000c6e4d0) (0xc001f1c6e0) Create stream I0428 11:07:38.040459 6 log.go:172] (0xc000c6e4d0) (0xc001f1c6e0) Stream added, broadcasting: 5 I0428 11:07:38.041797 6 log.go:172] (0xc000c6e4d0) Reply frame received for 5 I0428 11:07:39.108215 6 log.go:172] (0xc000c6e4d0) Data frame received for 5 I0428 11:07:39.108273 6 log.go:172] (0xc001f1c6e0) (5) Data frame handling I0428 11:07:39.108318 6 log.go:172] (0xc000c6e4d0) Data frame received for 3 I0428 11:07:39.108344 6 log.go:172] (0xc001f7c000) (3) Data frame handling I0428 11:07:39.108372 6 log.go:172] (0xc001f7c000) (3) Data frame sent I0428 11:07:39.108393 6 log.go:172] (0xc000c6e4d0) Data frame received for 3 I0428 11:07:39.108409 6 log.go:172] (0xc001f7c000) (3) Data frame handling I0428 11:07:39.110743 6 log.go:172] (0xc000c6e4d0) Data frame received for 1 I0428 11:07:39.110773 6 log.go:172] (0xc001bc46e0) (1) Data frame handling I0428 11:07:39.110793 6 log.go:172] (0xc001bc46e0) (1) Data frame sent I0428 11:07:39.110863 6 log.go:172] (0xc000c6e4d0) (0xc001bc46e0) Stream removed, broadcasting: 1 I0428 11:07:39.110952 6 log.go:172] (0xc000c6e4d0) Go away received I0428 11:07:39.111050 6 log.go:172] (0xc000c6e4d0) (0xc001bc46e0) Stream removed, broadcasting: 1 I0428 11:07:39.111080 6 log.go:172] (0xc000c6e4d0) (0xc001f7c000) Stream removed, broadcasting: 3 I0428 11:07:39.111092 6 log.go:172] (0xc000c6e4d0) (0xc001f1c6e0) Stream removed, broadcasting: 5 Apr 28 11:07:39.111: INFO: Found all expected endpoints: [netserver-0] Apr 28 11:07:39.114: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.118 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-znxdk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 11:07:39.114: INFO: >>> kubeConfig: /root/.kube/config I0428 11:07:39.146416 6 log.go:172] (0xc000c6ea50) (0xc001bc4a00) Create stream I0428 11:07:39.146454 6 log.go:172] (0xc000c6ea50) (0xc001bc4a00) Stream added, broadcasting: 1 I0428 11:07:39.148874 6 log.go:172] (0xc000c6ea50) Reply frame received for 1 I0428 11:07:39.148913 6 log.go:172] (0xc000c6ea50) (0xc001f1c780) Create stream I0428 11:07:39.148924 6 log.go:172] (0xc000c6ea50) (0xc001f1c780) Stream added, broadcasting: 3 I0428 11:07:39.149835 6 log.go:172] (0xc000c6ea50) Reply frame received for 3 I0428 11:07:39.149891 6 log.go:172] (0xc000c6ea50) (0xc001f1c820) Create stream I0428 11:07:39.149907 6 log.go:172] (0xc000c6ea50) (0xc001f1c820) Stream added, broadcasting: 5 I0428 11:07:39.150592 6 log.go:172] (0xc000c6ea50) Reply frame received for 5 I0428 11:07:40.231352 6 log.go:172] (0xc000c6ea50) Data frame received for 3 I0428 11:07:40.231405 6 log.go:172] (0xc001f1c780) (3) Data frame handling I0428 11:07:40.231440 6 log.go:172] (0xc001f1c780) (3) Data frame sent I0428 11:07:40.231485 6 log.go:172] (0xc000c6ea50) Data frame received for 3 I0428 11:07:40.231516 6 log.go:172] 
(0xc001f1c780) (3) Data frame handling I0428 11:07:40.232016 6 log.go:172] (0xc000c6ea50) Data frame received for 5 I0428 11:07:40.232052 6 log.go:172] (0xc001f1c820) (5) Data frame handling I0428 11:07:40.233537 6 log.go:172] (0xc000c6ea50) Data frame received for 1 I0428 11:07:40.233608 6 log.go:172] (0xc001bc4a00) (1) Data frame handling I0428 11:07:40.233640 6 log.go:172] (0xc001bc4a00) (1) Data frame sent I0428 11:07:40.233675 6 log.go:172] (0xc000c6ea50) (0xc001bc4a00) Stream removed, broadcasting: 1 I0428 11:07:40.233839 6 log.go:172] (0xc000c6ea50) (0xc001bc4a00) Stream removed, broadcasting: 1 I0428 11:07:40.233866 6 log.go:172] (0xc000c6ea50) (0xc001f1c780) Stream removed, broadcasting: 3 I0428 11:07:40.233878 6 log.go:172] (0xc000c6ea50) (0xc001f1c820) Stream removed, broadcasting: 5 Apr 28 11:07:40.233: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:07:40.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0428 11:07:40.233997 6 log.go:172] (0xc000c6ea50) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-znxdk" for this suite. Apr 28 11:08:02.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:08:02.267: INFO: namespace: e2e-tests-pod-network-test-znxdk, resource: bindings, ignored listing per whitelist Apr 28 11:08:02.333: INFO: namespace e2e-tests-pod-network-test-znxdk deletion completed in 22.094576503s • [SLOW TEST:52.639 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:08:02.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Apr 28 11:08:03.016: INFO: created pod pod-service-account-defaultsa Apr 28 11:08:03.016: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 28 11:08:03.024: INFO: created pod pod-service-account-mountsa Apr 28 11:08:03.024: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 28 11:08:03.050: INFO: created pod pod-service-account-nomountsa Apr 28 11:08:03.050: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 28 11:08:03.066: INFO: created pod pod-service-account-defaultsa-mountspec Apr 28 11:08:03.066: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true 
Apr 28 11:08:03.092: INFO: created pod pod-service-account-mountsa-mountspec Apr 28 11:08:03.092: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 28 11:08:03.159: INFO: created pod pod-service-account-nomountsa-mountspec Apr 28 11:08:03.159: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 28 11:08:03.189: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 28 11:08:03.189: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 28 11:08:03.219: INFO: created pod pod-service-account-mountsa-nomountspec Apr 28 11:08:03.219: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 28 11:08:03.268: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 28 11:08:03.268: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:08:03.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-9wvkp" for this suite. Apr 28 11:08:33.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:08:33.419: INFO: namespace: e2e-tests-svcaccounts-9wvkp, resource: bindings, ignored listing per whitelist Apr 28 11:08:33.424: INFO: namespace e2e-tests-svcaccounts-9wvkp deletion completed in 30.120089782s • [SLOW TEST:31.091 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:08:33.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 28 11:08:33.547: INFO: Waiting up to 5m0s for pod "pod-998c77d9-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-5mrrq" to be "success or failure" Apr 28 11:08:33.557: INFO: Pod "pod-998c77d9-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.272547ms Apr 28 11:08:35.682: INFO: Pod "pod-998c77d9-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135226278s Apr 28 11:08:37.687: INFO: Pod "pod-998c77d9-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.139774376s STEP: Saw pod success Apr 28 11:08:37.687: INFO: Pod "pod-998c77d9-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:08:37.694: INFO: Trying to get logs from node hunter-worker2 pod pod-998c77d9-8940-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:08:37.758: INFO: Waiting for pod pod-998c77d9-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:08:37.763: INFO: Pod pod-998c77d9-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:08:37.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5mrrq" for this suite. Apr 28 11:08:43.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:08:43.844: INFO: namespace: e2e-tests-emptydir-5mrrq, resource: bindings, ignored listing per whitelist Apr 28 11:08:43.859: INFO: namespace e2e-tests-emptydir-5mrrq deletion completed in 6.093154184s • [SLOW TEST:10.435 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:08:43.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-9fc65102-8940-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:08:43.993: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-2zdwg" to be "success or failure" Apr 28 11:08:44.009: INFO: Pod "pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.541081ms Apr 28 11:08:46.013: INFO: Pod "pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020443404s Apr 28 11:08:48.018: INFO: Pod "pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025007187s STEP: Saw pod success Apr 28 11:08:48.018: INFO: Pod "pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:08:48.020: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Apr 28 11:08:48.094: INFO: Waiting for pod pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:08:48.099: INFO: Pod pod-projected-secrets-9fc887bf-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:08:48.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2zdwg" for this suite. Apr 28 11:08:54.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:08:54.202: INFO: namespace: e2e-tests-projected-2zdwg, resource: bindings, ignored listing per whitelist Apr 28 11:08:54.216: INFO: namespace e2e-tests-projected-2zdwg deletion completed in 6.113343698s • [SLOW TEST:10.356 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:08:54.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:08:54.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-hk9sb" to be "success or failure" Apr 28 11:08:54.325: INFO: Pod "downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.051505ms Apr 28 11:08:56.328: INFO: Pod "downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020696434s Apr 28 11:08:58.333: INFO: Pod "downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025323421s STEP: Saw pod success Apr 28 11:08:58.333: INFO: Pod "downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:08:58.337: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:08:58.375: INFO: Waiting for pod downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:08:58.390: INFO: Pod downwardapi-volume-a5eec365-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:08:58.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hk9sb" for this suite. Apr 28 11:09:04.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:09:04.441: INFO: namespace: e2e-tests-projected-hk9sb, resource: bindings, ignored listing per whitelist Apr 28 11:09:04.502: INFO: namespace e2e-tests-projected-hk9sb deletion completed in 6.108385616s • [SLOW TEST:10.286 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:09:04.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 28 11:09:04.614: INFO: Waiting up to 5m0s for pod "pod-ac11aba9-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-xbcc9" to be "success or failure" Apr 28 11:09:04.636: INFO: Pod "pod-ac11aba9-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.211578ms Apr 28 11:09:06.640: INFO: Pod "pod-ac11aba9-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025427582s Apr 28 11:09:08.644: INFO: Pod "pod-ac11aba9-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029714984s STEP: Saw pod success Apr 28 11:09:08.644: INFO: Pod "pod-ac11aba9-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:09:08.647: INFO: Trying to get logs from node hunter-worker pod pod-ac11aba9-8940-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:09:08.669: INFO: Waiting for pod pod-ac11aba9-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:09:08.674: INFO: Pod pod-ac11aba9-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:09:08.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xbcc9" for this suite. Apr 28 11:09:14.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:09:14.813: INFO: namespace: e2e-tests-emptydir-xbcc9, resource: bindings, ignored listing per whitelist Apr 28 11:09:14.818: INFO: namespace e2e-tests-emptydir-xbcc9 deletion completed in 6.14031401s • [SLOW TEST:10.316 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:09:14.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Apr 28 11:09:14.938: INFO: Waiting up to 5m0s for pod "client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-containers-4gnxz" to be "success or failure" Apr 28 11:09:14.943: INFO: Pod "client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.951833ms Apr 28 11:09:16.946: INFO: Pod "client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008134738s Apr 28 11:09:18.951: INFO: Pod "client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012849937s STEP: Saw pod success Apr 28 11:09:18.951: INFO: Pod "client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:09:18.955: INFO: Trying to get logs from node hunter-worker2 pod client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:09:18.977: INFO: Waiting for pod client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:09:18.979: INFO: Pod client-containers-b2362c8d-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:09:18.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-4gnxz" for this suite. Apr 28 11:09:25.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:09:25.071: INFO: namespace: e2e-tests-containers-4gnxz, resource: bindings, ignored listing per whitelist Apr 28 11:09:25.073: INFO: namespace e2e-tests-containers-4gnxz deletion completed in 6.0915132s • [SLOW TEST:10.255 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:09:25.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 11:09:25.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-snz5k' Apr 28 11:09:25.328: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 11:09:25.328: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Apr 28 11:09:29.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-snz5k' Apr 28 11:09:29.527: INFO: stderr: "" Apr 28 11:09:29.527: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:09:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-snz5k" for this suite. Apr 28 11:09:51.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:09:51.590: INFO: namespace: e2e-tests-kubectl-snz5k, resource: bindings, ignored listing per whitelist Apr 28 11:09:51.634: INFO: namespace e2e-tests-kubectl-snz5k deletion completed in 22.10263209s • [SLOW TEST:26.560 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:09:51.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Apr 28 11:09:51.726: INFO: namespace e2e-tests-kubectl-bjp8c Apr 28 11:09:51.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bjp8c' Apr 28 11:09:54.413: INFO: stderr: "" Apr 28 11:09:54.413: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 28 11:09:55.418: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:09:55.418: INFO: Found 0 / 1 Apr 28 11:09:56.418: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:09:56.418: INFO: Found 0 / 1 Apr 28 11:09:57.417: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:09:57.417: INFO: Found 0 / 1 Apr 28 11:09:58.418: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:09:58.418: INFO: Found 1 / 1 Apr 28 11:09:58.418: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 28 11:09:58.421: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:09:58.421: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 28 11:09:58.421: INFO: wait on redis-master startup in e2e-tests-kubectl-bjp8c Apr 28 11:09:58.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ccnsq redis-master --namespace=e2e-tests-kubectl-bjp8c' Apr 28 11:09:58.538: INFO: stderr: "" Apr 28 11:09:58.538: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Apr 11:09:56.999 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Apr 11:09:56.999 # Server started, Redis version 3.2.12\n1:M 28 Apr 11:09:56.999 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Apr 11:09:56.999 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 28 11:09:58.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-bjp8c' Apr 28 11:09:58.691: INFO: stderr: "" Apr 28 11:09:58.691: INFO: stdout: "service/rm2 exposed\n" Apr 28 11:09:58.699: INFO: Service rm2 in namespace e2e-tests-kubectl-bjp8c found. STEP: exposing service Apr 28 11:10:00.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-bjp8c' Apr 28 11:10:00.851: INFO: stderr: "" Apr 28 11:10:00.851: INFO: stdout: "service/rm3 exposed\n" Apr 28 11:10:00.874: INFO: Service rm3 in namespace e2e-tests-kubectl-bjp8c found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:10:02.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bjp8c" for this suite. 
Apr 28 11:10:24.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:10:24.929: INFO: namespace: e2e-tests-kubectl-bjp8c, resource: bindings, ignored listing per whitelist Apr 28 11:10:24.980: INFO: namespace e2e-tests-kubectl-bjp8c deletion completed in 22.095243289s • [SLOW TEST:33.346 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:10:24.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Apr 28 11:10:25.142: INFO: Waiting up to 5m0s for pod "client-containers-dc13068e-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-containers-qkvbp" to be "success or failure" Apr 28 11:10:25.145: INFO: Pod "client-containers-dc13068e-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578845ms Apr 28 11:10:27.149: INFO: Pod "client-containers-dc13068e-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007063645s Apr 28 11:10:29.154: INFO: Pod "client-containers-dc13068e-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011483115s STEP: Saw pod success Apr 28 11:10:29.154: INFO: Pod "client-containers-dc13068e-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:10:29.157: INFO: Trying to get logs from node hunter-worker pod client-containers-dc13068e-8940-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:10:29.175: INFO: Waiting for pod client-containers-dc13068e-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:10:29.179: INFO: Pod client-containers-dc13068e-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:10:29.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-qkvbp" for this suite. 
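The two Docker Containers cases above come down to how a pod spec's command and args map onto the image's ENTRYPOINT and CMD: leaving both unset runs the image defaults, while setting command replaces the entrypoint (and args would replace CMD). A minimal sketch of the point, with a hypothetical pod name and an illustrative image rather than the one the test actually uses:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: use-image-defaults                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    # no command/args: the image's own ENTRYPOINT and CMD run unchanged;
    # adding `command: ["/bin/sh", "-c", "echo overridden"]` here would replace the ENTRYPOINT,
    # which is what the earlier "override the image's default command" case exercises
EOF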
Apr 28 11:10:35.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:10:35.279: INFO: namespace: e2e-tests-containers-qkvbp, resource: bindings, ignored listing per whitelist Apr 28 11:10:35.287: INFO: namespace e2e-tests-containers-qkvbp deletion completed in 6.104110829s • [SLOW TEST:10.306 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:10:35.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-e23297ef-8940-11ea-80e8-0242ac11000f STEP: Creating configMap with name cm-test-opt-upd-e2329840-8940-11ea-80e8-0242ac11000f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e23297ef-8940-11ea-80e8-0242ac11000f STEP: Updating configmap cm-test-opt-upd-e2329840-8940-11ea-80e8-0242ac11000f STEP: Creating configMap with name cm-test-opt-create-e2329868-8940-11ea-80e8-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:10:43.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nghtb" for this suite. 
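The optional-ConfigMap case above relies on marking the configMap volume source optional, so the pod starts and keeps running even when a referenced ConfigMap is deleted or does not exist yet, and the kubelet syncs the mounted contents once the ConfigMap appears or changes. A minimal sketch with hypothetical names echoing the log's cm-test-opt-* objects:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo                   # hypothetical name
spec:
  containers:
  - name: volume-test
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-del
    - name: cm-create
      mountPath: /etc/cm-create
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del                # deleted while the pod is running
      optional: true                       # pod is not failed when the ConfigMap is absent
  - name: cm-create
    configMap:
      name: cm-test-opt-create             # created only after the pod starts
      optional: true
EOF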
Apr 28 11:11:05.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:11:05.642: INFO: namespace: e2e-tests-configmap-nghtb, resource: bindings, ignored listing per whitelist Apr 28 11:11:05.670: INFO: namespace e2e-tests-configmap-nghtb deletion completed in 22.101285976s • [SLOW TEST:30.383 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:11:05.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-f44a1284-8940-11ea-80e8-0242ac11000f STEP: Creating secret with name secret-projected-all-test-volume-f44a123b-8940-11ea-80e8-0242ac11000f STEP: Creating a pod to test Check all projections for projected volume plugin Apr 28 11:11:05.814: INFO: Waiting up to 5m0s for pod "projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-j8ck5" to be "success or failure" Apr 28 11:11:05.821: INFO: Pod "projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.046484ms Apr 28 11:11:07.825: INFO: Pod "projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010854209s Apr 28 11:11:09.829: INFO: Pod "projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014944417s STEP: Saw pod success Apr 28 11:11:09.829: INFO: Pod "projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:11:09.832: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f container projected-all-volume-test: STEP: delete the pod Apr 28 11:11:09.850: INFO: Waiting for pod projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f to disappear Apr 28 11:11:09.854: INFO: Pod projected-volume-f44a11aa-8940-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:11:09.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j8ck5" for this suite. 
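The projected-combined check mounts a ConfigMap, a Secret, and downward API fields through a single projected volume, which is what lets one mount path carry all three projections at once. A minimal sketch of such a volume, with hypothetical resource names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "ls -R /all-volumes"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-volumes
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume   # hypothetical
      - secret:
          name: secret-projected-all-test-volume      # hypothetical
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF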
Apr 28 11:11:15.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:11:16.012: INFO: namespace: e2e-tests-projected-j8ck5, resource: bindings, ignored listing per whitelist Apr 28 11:11:16.016: INFO: namespace e2e-tests-projected-j8ck5 deletion completed in 6.158616553s • [SLOW TEST:10.346 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:11:16.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Apr 28 11:11:16.108: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:11:22.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-kg9sm" for this suite. 
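The init-container cases hinge on initContainers running to completion, one at a time and in order, before any regular container starts; with restartPolicy: Never a failing init container fails the whole pod, while the RestartAlways variant later in this run retries it. A minimal sketch with illustrative names and image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                          # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["true"]                      # must exit 0 before init2 starts
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo app started"]   # only runs after both init containers succeed
EOF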
Apr 28 11:11:29.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:11:29.052: INFO: namespace: e2e-tests-init-container-kg9sm, resource: bindings, ignored listing per whitelist Apr 28 11:11:29.109: INFO: namespace e2e-tests-init-container-kg9sm deletion completed in 6.100463616s • [SLOW TEST:13.093 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:11:29.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2hswr STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 11:11:29.223: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 28 11:11:49.364: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.4:8080/dial?request=hostName&protocol=http&host=10.244.2.129&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-2hswr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 11:11:49.364: INFO: >>> kubeConfig: /root/.kube/config I0428 11:11:49.403467 6 log.go:172] (0xc001d162c0) (0xc0018f37c0) Create stream I0428 11:11:49.403503 6 log.go:172] (0xc001d162c0) (0xc0018f37c0) Stream added, broadcasting: 1 I0428 11:11:49.405619 6 log.go:172] (0xc001d162c0) Reply frame received for 1 I0428 11:11:49.405665 6 log.go:172] (0xc001d162c0) (0xc0018f3860) Create stream I0428 11:11:49.405694 6 log.go:172] (0xc001d162c0) (0xc0018f3860) Stream added, broadcasting: 3 I0428 11:11:49.406559 6 log.go:172] (0xc001d162c0) Reply frame received for 3 I0428 11:11:49.406592 6 log.go:172] (0xc001d162c0) (0xc001e0cbe0) Create stream I0428 11:11:49.406604 6 log.go:172] (0xc001d162c0) (0xc001e0cbe0) Stream added, broadcasting: 5 I0428 11:11:49.407496 6 log.go:172] (0xc001d162c0) Reply frame received for 5 I0428 11:11:49.505667 6 log.go:172] (0xc001d162c0) Data frame received for 3 I0428 11:11:49.505691 6 log.go:172] (0xc0018f3860) (3) Data frame handling I0428 11:11:49.505704 6 log.go:172] (0xc0018f3860) (3) Data frame sent I0428 11:11:49.506310 6 log.go:172] (0xc001d162c0) Data frame received for 3 I0428 11:11:49.506328 6 log.go:172] (0xc0018f3860) (3) Data frame handling I0428 11:11:49.506386 6 log.go:172] (0xc001d162c0) Data frame received for 5 I0428 11:11:49.506430 6 log.go:172] (0xc001e0cbe0) (5) Data frame handling I0428 11:11:49.508333 6 log.go:172] (0xc001d162c0) Data frame 
received for 1 I0428 11:11:49.508367 6 log.go:172] (0xc0018f37c0) (1) Data frame handling I0428 11:11:49.508397 6 log.go:172] (0xc0018f37c0) (1) Data frame sent I0428 11:11:49.508418 6 log.go:172] (0xc001d162c0) (0xc0018f37c0) Stream removed, broadcasting: 1 I0428 11:11:49.508443 6 log.go:172] (0xc001d162c0) Go away received I0428 11:11:49.508580 6 log.go:172] (0xc001d162c0) (0xc0018f37c0) Stream removed, broadcasting: 1 I0428 11:11:49.508603 6 log.go:172] (0xc001d162c0) (0xc0018f3860) Stream removed, broadcasting: 3 I0428 11:11:49.508616 6 log.go:172] (0xc001d162c0) (0xc001e0cbe0) Stream removed, broadcasting: 5 Apr 28 11:11:49.508: INFO: Waiting for endpoints: map[] Apr 28 11:11:49.511: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.4:8080/dial?request=hostName&protocol=http&host=10.244.1.3&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-2hswr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 11:11:49.511: INFO: >>> kubeConfig: /root/.kube/config I0428 11:11:49.544388 6 log.go:172] (0xc0020482c0) (0xc001cbc0a0) Create stream I0428 11:11:49.544411 6 log.go:172] (0xc0020482c0) (0xc001cbc0a0) Stream added, broadcasting: 1 I0428 11:11:49.548985 6 log.go:172] (0xc0020482c0) Reply frame received for 1 I0428 11:11:49.549042 6 log.go:172] (0xc0020482c0) (0xc001217860) Create stream I0428 11:11:49.549059 6 log.go:172] (0xc0020482c0) (0xc001217860) Stream added, broadcasting: 3 I0428 11:11:49.551971 6 log.go:172] (0xc0020482c0) Reply frame received for 3 I0428 11:11:49.552036 6 log.go:172] (0xc0020482c0) (0xc000a345a0) Create stream I0428 11:11:49.552071 6 log.go:172] (0xc0020482c0) (0xc000a345a0) Stream added, broadcasting: 5 I0428 11:11:49.553588 6 log.go:172] (0xc0020482c0) Reply frame received for 5 I0428 11:11:49.625066 6 log.go:172] (0xc0020482c0) Data frame received for 3 I0428 11:11:49.625357 6 log.go:172] (0xc001217860) (3) Data frame handling I0428 11:11:49.625387 6 log.go:172] (0xc001217860) (3) Data frame sent I0428 11:11:49.625402 6 log.go:172] (0xc0020482c0) Data frame received for 3 I0428 11:11:49.625414 6 log.go:172] (0xc001217860) (3) Data frame handling I0428 11:11:49.625570 6 log.go:172] (0xc0020482c0) Data frame received for 5 I0428 11:11:49.625601 6 log.go:172] (0xc000a345a0) (5) Data frame handling I0428 11:11:49.627096 6 log.go:172] (0xc0020482c0) Data frame received for 1 I0428 11:11:49.627168 6 log.go:172] (0xc001cbc0a0) (1) Data frame handling I0428 11:11:49.627204 6 log.go:172] (0xc001cbc0a0) (1) Data frame sent I0428 11:11:49.627232 6 log.go:172] (0xc0020482c0) (0xc001cbc0a0) Stream removed, broadcasting: 1 I0428 11:11:49.627277 6 log.go:172] (0xc0020482c0) Go away received I0428 11:11:49.627435 6 log.go:172] (0xc0020482c0) (0xc001cbc0a0) Stream removed, broadcasting: 1 I0428 11:11:49.627467 6 log.go:172] (0xc0020482c0) (0xc001217860) Stream removed, broadcasting: 3 I0428 11:11:49.627485 6 log.go:172] (0xc0020482c0) (0xc000a345a0) Stream removed, broadcasting: 5 Apr 28 11:11:49.627: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:11:49.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-2hswr" for this suite. 
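The intra-pod check above works by exec'ing curl inside a host-network helper pod against the test pod's /dial endpoint, which in turn makes the requested number of HTTP attempts to the target netserver pod and reports the hostnames it reached. Run by hand it looks roughly like this, with the pod IPs and namespace as placeholders for the values printed in the log:

kubectl exec host-test-container-pod -c hostexec -n <test-namespace> -- \
  /bin/sh -c "curl -g -q -s 'http://<test-pod-ip>:8080/dial?request=hostName&protocol=http&host=<netserver-pod-ip>&port=8080&tries=1'"
# expected output is JSON along the lines of {"responses":["netserver-0"]}, one entry per endpoint reached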
Apr 28 11:12:13.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:12:13.694: INFO: namespace: e2e-tests-pod-network-test-2hswr, resource: bindings, ignored listing per whitelist Apr 28 11:12:13.738: INFO: namespace e2e-tests-pod-network-test-2hswr deletion completed in 24.107286781s • [SLOW TEST:44.629 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:12:13.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 28 11:12:13.835: INFO: Waiting up to 5m0s for pod "pod-1cdc6577-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-2jngc" to be "success or failure" Apr 28 11:12:13.848: INFO: Pod "pod-1cdc6577-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.829518ms Apr 28 11:12:15.857: INFO: Pod "pod-1cdc6577-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021623409s Apr 28 11:12:17.861: INFO: Pod "pod-1cdc6577-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025906805s STEP: Saw pod success Apr 28 11:12:17.861: INFO: Pod "pod-1cdc6577-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:12:17.864: INFO: Trying to get logs from node hunter-worker pod pod-1cdc6577-8941-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:12:17.900: INFO: Waiting for pod pod-1cdc6577-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:12:17.938: INFO: Pod pod-1cdc6577-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:12:17.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2jngc" for this suite. 
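The emptyDir matrix above varies three things that the test names encode: the user the container runs as (root vs. non-root), the permission bits of a file the test writes into the volume (0644/0666/0777), and the volume medium (tmpfs vs. the node's default disk-backed medium). A minimal sketch of the non-root, tmpfs flavour; the names, UID, and image are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                      # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                        # "non-root" variants; omit for the root variants
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                       # tmpfs variants; omit medium for the "default" variants
EOF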
Apr 28 11:12:23.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:12:23.983: INFO: namespace: e2e-tests-emptydir-2jngc, resource: bindings, ignored listing per whitelist Apr 28 11:12:24.031: INFO: namespace e2e-tests-emptydir-2jngc deletion completed in 6.088570021s • [SLOW TEST:10.292 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:12:24.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 28 11:12:24.116: INFO: Waiting up to 5m0s for pod "pod-22fcdb0d-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-fjtr9" to be "success or failure" Apr 28 11:12:24.160: INFO: Pod "pod-22fcdb0d-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 44.43788ms Apr 28 11:12:26.164: INFO: Pod "pod-22fcdb0d-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048133825s Apr 28 11:12:28.167: INFO: Pod "pod-22fcdb0d-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051451883s STEP: Saw pod success Apr 28 11:12:28.167: INFO: Pod "pod-22fcdb0d-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:12:28.170: INFO: Trying to get logs from node hunter-worker2 pod pod-22fcdb0d-8941-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:12:28.246: INFO: Waiting for pod pod-22fcdb0d-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:12:28.259: INFO: Pod pod-22fcdb0d-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:12:28.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fjtr9" for this suite. 
Apr 28 11:12:34.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:12:34.303: INFO: namespace: e2e-tests-emptydir-fjtr9, resource: bindings, ignored listing per whitelist Apr 28 11:12:34.353: INFO: namespace e2e-tests-emptydir-fjtr9 deletion completed in 6.090212311s • [SLOW TEST:10.322 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:12:34.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Apr 28 11:12:34.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kx7bz' Apr 28 11:12:34.697: INFO: stderr: "" Apr 28 11:12:34.697: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 28 11:12:35.748: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:12:35.748: INFO: Found 0 / 1 Apr 28 11:12:36.701: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:12:36.702: INFO: Found 0 / 1 Apr 28 11:12:37.702: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:12:37.702: INFO: Found 0 / 1 Apr 28 11:12:38.702: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:12:38.702: INFO: Found 1 / 1 Apr 28 11:12:38.702: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 28 11:12:38.706: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:12:38.706: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 28 11:12:38.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-mhskk --namespace=e2e-tests-kubectl-kx7bz -p {"metadata":{"annotations":{"x":"y"}}}' Apr 28 11:12:38.814: INFO: stderr: "" Apr 28 11:12:38.814: INFO: stdout: "pod/redis-master-mhskk patched\n" STEP: checking annotations Apr 28 11:12:38.834: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:12:38.834: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:12:38.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kx7bz" for this suite. 
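The patch check adds an annotation to every pod behind the rc with a strategic-merge patch; from an interactive shell the JSON payload needs quoting, e.g. (pod and namespace names taken from the log above):

kubectl patch pod redis-master-mhskk -n e2e-tests-kubectl-kx7bz \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-mhskk -n e2e-tests-kubectl-kx7bz \
  -o jsonpath='{.metadata.annotations.x}'   # prints: y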
Apr 28 11:13:00.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:13:00.917: INFO: namespace: e2e-tests-kubectl-kx7bz, resource: bindings, ignored listing per whitelist Apr 28 11:13:00.955: INFO: namespace e2e-tests-kubectl-kx7bz deletion completed in 22.116810283s • [SLOW TEST:26.602 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:13:00.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3906747a-8941-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:13:01.094: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-99qdn" to be "success or failure" Apr 28 11:13:01.111: INFO: Pod "pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.850831ms Apr 28 11:13:03.143: INFO: Pod "pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049147292s Apr 28 11:13:05.161: INFO: Pod "pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067191109s STEP: Saw pod success Apr 28 11:13:05.161: INFO: Pod "pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:13:05.164: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Apr 28 11:13:05.183: INFO: Waiting for pod pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:13:05.188: INFO: Pod pod-projected-secrets-3907225a-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:13:05.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-99qdn" for this suite. 
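The projected-secret permission cases set either a volume-wide defaultMode or a per-item mode and then verify those bits on the mounted files. A minimal sketch showing both knobs; the secret name, paths, and the 0400 mode are illustrative, not the values the test generates:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "ls -l /projected-secret"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /projected-secret
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                    # applies to every projected file unless an item overrides it
      sources:
      - secret:
          name: projected-secret-test      # hypothetical
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400                     # per-item override, the "Item Mode" variant
EOF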
Apr 28 11:13:11.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:13:11.258: INFO: namespace: e2e-tests-projected-99qdn, resource: bindings, ignored listing per whitelist Apr 28 11:13:11.279: INFO: namespace e2e-tests-projected-99qdn deletion completed in 6.088054478s • [SLOW TEST:10.324 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:13:11.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Apr 28 11:13:11.400: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:13:18.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-l8mpq" for this suite. 
Apr 28 11:13:40.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:13:40.989: INFO: namespace: e2e-tests-init-container-l8mpq, resource: bindings, ignored listing per whitelist Apr 28 11:13:41.056: INFO: namespace e2e-tests-init-container-l8mpq deletion completed in 22.097640158s • [SLOW TEST:29.777 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:13:41.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0428 11:14:11.695186 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 11:14:11.695: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:14:11.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-vtjwf" for this suite. 
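The garbage-collector case deletes the deployment with propagationPolicy Orphan, so instead of cascading the delete the owner reference is stripped from the ReplicaSet and it survives. Outside the test framework the same thing looks roughly like this; the deployment and namespace names are placeholders, --cascade=orphan is the modern spelling, and older kubectl clients such as the v1.13 one in this run used --cascade=false:

kubectl delete deployment <deployment-name> -n <namespace> --cascade=orphan
# or directly against the API with an explicit DeleteOptions body (assumes `kubectl proxy` is listening on 127.0.0.1:8001):
curl -X DELETE 'http://127.0.0.1:8001/apis/apps/v1/namespaces/<namespace>/deployments/<deployment-name>' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
# afterwards `kubectl get rs -n <namespace>` still lists the ReplicaSet, now without an ownerReference to the deployment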
Apr 28 11:14:17.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:14:17.760: INFO: namespace: e2e-tests-gc-vtjwf, resource: bindings, ignored listing per whitelist Apr 28 11:14:17.780: INFO: namespace e2e-tests-gc-vtjwf deletion completed in 6.081938253s • [SLOW TEST:36.724 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:14:17.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 28 11:14:18.110: INFO: Waiting up to 5m0s for pod "pod-66db6cee-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-8z8qj" to be "success or failure" Apr 28 11:14:18.124: INFO: Pod "pod-66db6cee-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.180902ms Apr 28 11:14:20.128: INFO: Pod "pod-66db6cee-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01817154s Apr 28 11:14:22.132: INFO: Pod "pod-66db6cee-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021844534s STEP: Saw pod success Apr 28 11:14:22.132: INFO: Pod "pod-66db6cee-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:14:22.135: INFO: Trying to get logs from node hunter-worker2 pod pod-66db6cee-8941-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:14:22.175: INFO: Waiting for pod pod-66db6cee-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:14:22.189: INFO: Pod pod-66db6cee-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:14:22.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8z8qj" for this suite. 
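The EmptyDir test above mounts a memory-backed (tmpfs) emptyDir, writes a 0644-mode file into it as a non-root user, and treats pod success as the pass condition. The conformance fixture itself (test/e2e/common/empty_dir.go) uses the framework's own helper image and flags; the sketch below only illustrates the volume and securityContext shape involved, with an assumed UID and a busybox command standing in for the real test container:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod returns a pod that mounts a memory-backed emptyDir and
// writes a world-readable file into it as a non-root user.
func tmpfsEmptyDirPod() *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}
```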
Apr 28 11:14:28.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:14:28.254: INFO: namespace: e2e-tests-emptydir-8z8qj, resource: bindings, ignored listing per whitelist Apr 28 11:14:28.276: INFO: namespace e2e-tests-emptydir-8z8qj deletion completed in 6.083892664s • [SLOW TEST:10.495 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:14:28.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-6d0d817e-8941-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:14:28.388: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-bpccs" to be "success or failure" Apr 28 11:14:28.420: INFO: Pod "pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.424949ms Apr 28 11:14:30.424: INFO: Pod "pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035754482s Apr 28 11:14:32.428: INFO: Pod "pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040102195s STEP: Saw pod success Apr 28 11:14:32.429: INFO: Pod "pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:14:32.432: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Apr 28 11:14:32.477: INFO: Waiting for pod pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:14:32.489: INFO: Pod pod-projected-secrets-6d0f3fa4-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:14:32.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bpccs" for this suite. 
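The projected-secret test above consumes a Secret through a projected volume with an explicit items mapping and per-item file mode, then verifies the resulting file's content and permissions from inside the pod. A sketch of that volume shape; the secret name, key, path, and mode below are placeholders, not the generated values in the log:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume maps one key of a Secret to a custom path with an
// explicit file mode, via the "projected" volume type.
func projectedSecretVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}
```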
Apr 28 11:14:38.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:14:38.519: INFO: namespace: e2e-tests-projected-bpccs, resource: bindings, ignored listing per whitelist Apr 28 11:14:38.576: INFO: namespace e2e-tests-projected-bpccs deletion completed in 6.083353371s • [SLOW TEST:10.300 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:14:38.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:14:38.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-qb4wk" to be "success or failure" Apr 28 11:14:38.699: INFO: Pod "downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316946ms Apr 28 11:14:40.708: INFO: Pod "downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017404095s Apr 28 11:14:42.712: INFO: Pod "downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021610391s STEP: Saw pod success Apr 28 11:14:42.712: INFO: Pod "downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:14:42.715: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:14:42.736: INFO: Waiting for pod downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:14:42.741: INFO: Pod downwardapi-volume-73337d25-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:14:42.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qb4wk" for this suite. 
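The downward-api test above exposes limits.memory through a downwardAPI volume on a container that declares no memory limit, and expects the node's allocatable memory to be reported as the default. The interesting piece is the resourceFieldRef item; a sketch follows, with the volume name and file path as placeholders (the container name matches the one in the log):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// memoryLimitVolume exposes the container's effective memory limit as a file.
// When the container sets no limit, the kubelet substitutes the node's
// allocatable memory, which is what the test asserts on.
func memoryLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
}
```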
Apr 28 11:14:48.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:14:48.777: INFO: namespace: e2e-tests-downward-api-qb4wk, resource: bindings, ignored listing per whitelist Apr 28 11:14:48.836: INFO: namespace e2e-tests-downward-api-qb4wk deletion completed in 6.091947187s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:14:48.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:14:48.964: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 28 11:14:48.970: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:48.972: INFO: Number of nodes with available pods: 0 Apr 28 11:14:48.972: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:14:49.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:49.985: INFO: Number of nodes with available pods: 0 Apr 28 11:14:49.985: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:14:50.976: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:50.979: INFO: Number of nodes with available pods: 0 Apr 28 11:14:50.979: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:14:51.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:51.983: INFO: Number of nodes with available pods: 0 Apr 28 11:14:51.983: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:14:52.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:52.981: INFO: Number of nodes with available pods: 2 Apr 28 11:14:52.981: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 28 11:14:53.010: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:53.010: INFO: Wrong image for pod: daemon-set-gwtlh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:53.031: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:54.036: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:54.036: INFO: Wrong image for pod: daemon-set-gwtlh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:54.040: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:55.036: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:55.036: INFO: Wrong image for pod: daemon-set-gwtlh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:55.040: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:56.036: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 28 11:14:56.036: INFO: Wrong image for pod: daemon-set-gwtlh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:56.036: INFO: Pod daemon-set-gwtlh is not available Apr 28 11:14:56.041: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:57.035: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:57.036: INFO: Pod daemon-set-p4t4v is not available Apr 28 11:14:57.039: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:58.035: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:58.035: INFO: Pod daemon-set-p4t4v is not available Apr 28 11:14:58.039: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:14:59.036: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:14:59.036: INFO: Pod daemon-set-p4t4v is not available Apr 28 11:14:59.039: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:00.035: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:15:00.040: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:01.036: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:15:01.040: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:02.036: INFO: Wrong image for pod: daemon-set-bvp2r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 11:15:02.036: INFO: Pod daemon-set-bvp2r is not available Apr 28 11:15:02.040: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:03.037: INFO: Pod daemon-set-qf9df is not available Apr 28 11:15:03.039: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 28 11:15:03.042: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:03.044: INFO: Number of nodes with available pods: 1 Apr 28 11:15:03.044: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:15:04.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:04.053: INFO: Number of nodes with available pods: 1 Apr 28 11:15:04.053: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:15:05.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:05.053: INFO: Number of nodes with available pods: 1 Apr 28 11:15:05.053: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:15:06.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:15:06.053: INFO: Number of nodes with available pods: 2 Apr 28 11:15:06.053: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cw5pv, will wait for the garbage collector to delete the pods Apr 28 11:15:06.127: INFO: Deleting DaemonSet.extensions daemon-set took: 6.376301ms Apr 28 11:15:06.227: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.229008ms Apr 28 11:15:11.331: INFO: Number of nodes with available pods: 0 Apr 28 11:15:11.331: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 11:15:11.334: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cw5pv/daemonsets","resourceVersion":"7637659"},"items":null} Apr 28 11:15:11.336: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cw5pv/pods","resourceVersion":"7637659"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:15:11.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-cw5pv" for this suite. 
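The DaemonSet test above runs nginx:1.14-alpine on both worker nodes, updates the pod template image to the redis test image, and watches the RollingUpdate strategy replace one pod at a time (the "Wrong image for pod" and "not available" lines) until every node is running the new image. A sketch of a DaemonSet declared with an explicit RollingUpdate strategy; labels and object names are illustrative, while the images are the ones appearing in the log:

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rollingUpdateDaemonSet declares a DaemonSet whose pods are replaced in a
// rolling fashion whenever the pod template (for example its image) changes.
func rollingUpdateDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector:       &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						// Changing this image to gcr.io/kubernetes-e2e-test-images/redis:1.0
						// is what triggers the rolling replacement seen in the log.
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}
```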
Apr 28 11:15:17.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:15:17.409: INFO: namespace: e2e-tests-daemonsets-cw5pv, resource: bindings, ignored listing per whitelist Apr 28 11:15:17.472: INFO: namespace e2e-tests-daemonsets-cw5pv deletion completed in 6.124310638s • [SLOW TEST:28.635 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:15:17.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 28 11:15:17.563: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbh6j,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbh6j/configmaps/e2e-watch-test-watch-closed,UID:8a5c69b0-8941-11ea-99e8-0242ac110002,ResourceVersion:7637707,Generation:0,CreationTimestamp:2020-04-28 11:15:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 11:15:17.564: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbh6j,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbh6j/configmaps/e2e-watch-test-watch-closed,UID:8a5c69b0-8941-11ea-99e8-0242ac110002,ResourceVersion:7637708,Generation:0,CreationTimestamp:2020-04-28 11:15:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 28 11:15:17.606: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbh6j,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbh6j/configmaps/e2e-watch-test-watch-closed,UID:8a5c69b0-8941-11ea-99e8-0242ac110002,ResourceVersion:7637709,Generation:0,CreationTimestamp:2020-04-28 11:15:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 11:15:17.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbh6j,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbh6j/configmaps/e2e-watch-test-watch-closed,UID:8a5c69b0-8941-11ea-99e8-0242ac110002,ResourceVersion:7637710,Generation:0,CreationTimestamp:2020-04-28 11:15:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:15:17.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-pbh6j" for this suite. Apr 28 11:15:23.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:15:23.652: INFO: namespace: e2e-tests-watch-pbh6j, resource: bindings, ignored listing per whitelist Apr 28 11:15:23.713: INFO: namespace e2e-tests-watch-pbh6j deletion completed in 6.102777924s • [SLOW TEST:6.241 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:15:23.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:15:23.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-c9cfw" to be "success or failure" Apr 28 
11:15:23.826: INFO: Pod "downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.81548ms Apr 28 11:15:25.831: INFO: Pod "downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012037933s Apr 28 11:15:27.836: INFO: Pod "downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016877692s STEP: Saw pod success Apr 28 11:15:27.836: INFO: Pod "downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:15:27.839: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:15:27.857: INFO: Waiting for pod downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:15:27.862: INFO: Pod downwardapi-volume-8e13b233-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:15:27.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c9cfw" for this suite. Apr 28 11:15:33.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:15:33.942: INFO: namespace: e2e-tests-downward-api-c9cfw, resource: bindings, ignored listing per whitelist Apr 28 11:15:33.972: INFO: namespace e2e-tests-downward-api-c9cfw deletion completed in 6.106948405s • [SLOW TEST:10.258 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:15:33.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Apr 28 11:15:34.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:34.321: INFO: stderr: "" Apr 28 11:15:34.321: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
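A few entries back, the Watchers test closed a watch on a ConfigMap after two notifications and then opened a new watch starting from the last ResourceVersion it had observed, expecting to receive exactly the MODIFIED (mutation: 2) and DELETED events that happened while it was not watching. The client-go pattern it exercises looks roughly like the following; the helper name is made up, the label selector value is the one in the log, and the signature is current client-go rather than the v1.13 vintage used in this run:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// resumeConfigMapWatch starts a new watch that replays every change made to
// matching ConfigMaps since the given resourceVersion was observed.
func resumeConfigMapWatch(ctx context.Context, c kubernetes.Interface, ns, lastSeenRV string) (watch.Interface, error) {
	return c.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastSeenRV,
	})
}
```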
Apr 28 11:15:34.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:34.465: INFO: stderr: "" Apr 28 11:15:34.465: INFO: stdout: "update-demo-nautilus-pzgfv update-demo-nautilus-vmg76 " Apr 28 11:15:34.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzgfv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:34.581: INFO: stderr: "" Apr 28 11:15:34.581: INFO: stdout: "" Apr 28 11:15:34.581: INFO: update-demo-nautilus-pzgfv is created but not running Apr 28 11:15:39.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:39.693: INFO: stderr: "" Apr 28 11:15:39.693: INFO: stdout: "update-demo-nautilus-pzgfv update-demo-nautilus-vmg76 " Apr 28 11:15:39.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzgfv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:39.804: INFO: stderr: "" Apr 28 11:15:39.805: INFO: stdout: "true" Apr 28 11:15:39.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzgfv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:39.915: INFO: stderr: "" Apr 28 11:15:39.915: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 11:15:39.915: INFO: validating pod update-demo-nautilus-pzgfv Apr 28 11:15:39.920: INFO: got data: { "image": "nautilus.jpg" } Apr 28 11:15:39.920: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 11:15:39.920: INFO: update-demo-nautilus-pzgfv is verified up and running Apr 28 11:15:39.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmg76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:40.023: INFO: stderr: "" Apr 28 11:15:40.023: INFO: stdout: "true" Apr 28 11:15:40.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmg76 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:40.123: INFO: stderr: "" Apr 28 11:15:40.123: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 11:15:40.123: INFO: validating pod update-demo-nautilus-vmg76 Apr 28 11:15:40.127: INFO: got data: { "image": "nautilus.jpg" } Apr 28 11:15:40.127: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 28 11:15:40.127: INFO: update-demo-nautilus-vmg76 is verified up and running STEP: scaling down the replication controller Apr 28 11:15:40.129: INFO: scanned /root for discovery docs: Apr 28 11:15:40.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:41.274: INFO: stderr: "" Apr 28 11:15:41.274: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 11:15:41.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:41.380: INFO: stderr: "" Apr 28 11:15:41.380: INFO: stdout: "update-demo-nautilus-pzgfv update-demo-nautilus-vmg76 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 28 11:15:46.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:46.490: INFO: stderr: "" Apr 28 11:15:46.490: INFO: stdout: "update-demo-nautilus-pzgfv update-demo-nautilus-vmg76 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 28 11:15:51.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:51.611: INFO: stderr: "" Apr 28 11:15:51.611: INFO: stdout: "update-demo-nautilus-vmg76 " Apr 28 11:15:51.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmg76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:51.711: INFO: stderr: "" Apr 28 11:15:51.711: INFO: stdout: "true" Apr 28 11:15:51.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmg76 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:51.829: INFO: stderr: "" Apr 28 11:15:51.829: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 11:15:51.829: INFO: validating pod update-demo-nautilus-vmg76 Apr 28 11:15:51.832: INFO: got data: { "image": "nautilus.jpg" } Apr 28 11:15:51.833: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 11:15:51.833: INFO: update-demo-nautilus-vmg76 is verified up and running STEP: scaling up the replication controller Apr 28 11:15:51.835: INFO: scanned /root for discovery docs: Apr 28 11:15:51.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:52.968: INFO: stderr: "" Apr 28 11:15:52.968: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 28 11:15:52.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:53.075: INFO: stderr: "" Apr 28 11:15:53.075: INFO: stdout: "update-demo-nautilus-56nbl update-demo-nautilus-vmg76 " Apr 28 11:15:53.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56nbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:53.168: INFO: stderr: "" Apr 28 11:15:53.168: INFO: stdout: "" Apr 28 11:15:53.168: INFO: update-demo-nautilus-56nbl is created but not running Apr 28 11:15:58.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:58.280: INFO: stderr: "" Apr 28 11:15:58.280: INFO: stdout: "update-demo-nautilus-56nbl update-demo-nautilus-vmg76 " Apr 28 11:15:58.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56nbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:58.386: INFO: stderr: "" Apr 28 11:15:58.386: INFO: stdout: "true" Apr 28 11:15:58.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56nbl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:58.481: INFO: stderr: "" Apr 28 11:15:58.481: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 11:15:58.481: INFO: validating pod update-demo-nautilus-56nbl Apr 28 11:15:58.485: INFO: got data: { "image": "nautilus.jpg" } Apr 28 11:15:58.485: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 11:15:58.485: INFO: update-demo-nautilus-56nbl is verified up and running Apr 28 11:15:58.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmg76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:58.591: INFO: stderr: "" Apr 28 11:15:58.591: INFO: stdout: "true" Apr 28 11:15:58.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmg76 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:58.694: INFO: stderr: "" Apr 28 11:15:58.694: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 11:15:58.694: INFO: validating pod update-demo-nautilus-vmg76 Apr 28 11:15:58.698: INFO: got data: { "image": "nautilus.jpg" } Apr 28 11:15:58.698: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
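The Update Demo test above drives the update-demo-nautilus ReplicationController entirely through kubectl: create from a manifest, poll pod names and images with Go templates, scale down to one replica and back up to two, then verify each pod and clean up (the remaining entries below finish that verification and the force delete). The same scale step expressed against the API rather than the CLI would look roughly like this; the helper name is made up and the signatures are current client-go:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleRC sets spec.replicas on a ReplicationController, which is functionally
// what `kubectl scale rc update-demo-nautilus --replicas=N` achieves.
func scaleRC(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) error {
	rc, err := c.CoreV1().ReplicationControllers(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	rc.Spec.Replicas = &replicas
	_, err = c.CoreV1().ReplicationControllers(ns).Update(ctx, rc, metav1.UpdateOptions{})
	return err
}
```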
Apr 28 11:15:58.698: INFO: update-demo-nautilus-vmg76 is verified up and running STEP: using delete to clean up resources Apr 28 11:15:58.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:58.812: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 11:15:58.812: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 28 11:15:58.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-fdsvl' Apr 28 11:15:58.917: INFO: stderr: "No resources found.\n" Apr 28 11:15:58.917: INFO: stdout: "" Apr 28 11:15:58.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-fdsvl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 11:15:59.026: INFO: stderr: "" Apr 28 11:15:59.026: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:15:59.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fdsvl" for this suite. Apr 28 11:16:21.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:16:21.247: INFO: namespace: e2e-tests-kubectl-fdsvl, resource: bindings, ignored listing per whitelist Apr 28 11:16:21.281: INFO: namespace e2e-tests-kubectl-fdsvl deletion completed in 22.25130361s • [SLOW TEST:47.309 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:16:21.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Apr 28 11:16:21.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:21.637: INFO: stderr: "" Apr 28 
11:16:21.637: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 11:16:21.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:21.784: INFO: stderr: "" Apr 28 11:16:21.784: INFO: stdout: "update-demo-nautilus-jxc9n update-demo-nautilus-l25qf " Apr 28 11:16:21.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxc9n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:21.883: INFO: stderr: "" Apr 28 11:16:21.883: INFO: stdout: "" Apr 28 11:16:21.883: INFO: update-demo-nautilus-jxc9n is created but not running Apr 28 11:16:26.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:26.984: INFO: stderr: "" Apr 28 11:16:26.984: INFO: stdout: "update-demo-nautilus-jxc9n update-demo-nautilus-l25qf " Apr 28 11:16:26.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxc9n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:27.086: INFO: stderr: "" Apr 28 11:16:27.086: INFO: stdout: "true" Apr 28 11:16:27.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxc9n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:27.196: INFO: stderr: "" Apr 28 11:16:27.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 11:16:27.197: INFO: validating pod update-demo-nautilus-jxc9n Apr 28 11:16:27.201: INFO: got data: { "image": "nautilus.jpg" } Apr 28 11:16:27.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 11:16:27.201: INFO: update-demo-nautilus-jxc9n is verified up and running Apr 28 11:16:27.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l25qf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:27.307: INFO: stderr: "" Apr 28 11:16:27.307: INFO: stdout: "true" Apr 28 11:16:27.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l25qf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:27.421: INFO: stderr: "" Apr 28 11:16:27.421: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 11:16:27.421: INFO: validating pod update-demo-nautilus-l25qf Apr 28 11:16:27.425: INFO: got data: { "image": "nautilus.jpg" } Apr 28 11:16:27.425: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 11:16:27.425: INFO: update-demo-nautilus-l25qf is verified up and running STEP: using delete to clean up resources Apr 28 11:16:27.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:27.524: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 11:16:27.525: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 28 11:16:27.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-nczr5' Apr 28 11:16:27.616: INFO: stderr: "No resources found.\n" Apr 28 11:16:27.616: INFO: stdout: "" Apr 28 11:16:27.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-nczr5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 11:16:27.869: INFO: stderr: "" Apr 28 11:16:27.870: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:16:27.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nczr5" for this suite. 
Apr 28 11:16:50.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:16:50.037: INFO: namespace: e2e-tests-kubectl-nczr5, resource: bindings, ignored listing per whitelist Apr 28 11:16:50.080: INFO: namespace e2e-tests-kubectl-nczr5 deletion completed in 22.136561413s • [SLOW TEST:28.798 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:16:50.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Apr 28 11:16:50.220: INFO: Waiting up to 5m0s for pod "downward-api-c1994798-8941-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-42p9z" to be "success or failure" Apr 28 11:16:50.229: INFO: Pod "downward-api-c1994798-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.474977ms Apr 28 11:16:52.233: INFO: Pod "downward-api-c1994798-8941-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012530572s Apr 28 11:16:54.237: INFO: Pod "downward-api-c1994798-8941-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016202573s STEP: Saw pod success Apr 28 11:16:54.237: INFO: Pod "downward-api-c1994798-8941-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:16:54.239: INFO: Trying to get logs from node hunter-worker pod downward-api-c1994798-8941-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 11:16:54.260: INFO: Waiting for pod downward-api-c1994798-8941-11ea-80e8-0242ac11000f to disappear Apr 28 11:16:54.280: INFO: Pod downward-api-c1994798-8941-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:16:54.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-42p9z" for this suite. 
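The Downward API test above injects the node's IP into the container environment and checks for it in the container output. The wiring is an env var whose valueFrom points at the status.hostIP field; a sketch of that container fragment, with the image and command as stand-ins (the container name matches the log):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// hostIPEnvContainer receives the IP of the node the pod landed on via the
// downward API, using the status.hostIP field reference.
func hostIPEnvContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "HOST_IP",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
			},
		}},
	}
}
```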
Apr 28 11:17:00.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:17:00.364: INFO: namespace: e2e-tests-downward-api-42p9z, resource: bindings, ignored listing per whitelist Apr 28 11:17:00.374: INFO: namespace e2e-tests-downward-api-42p9z deletion completed in 6.091110027s • [SLOW TEST:10.294 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:17:00.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0428 11:17:40.740651 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 11:17:40.740: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:17:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-kmktk" for this suite. 
Apr 28 11:17:48.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:17:48.785: INFO: namespace: e2e-tests-gc-kmktk, resource: bindings, ignored listing per whitelist Apr 28 11:17:48.823: INFO: namespace e2e-tests-gc-kmktk deletion completed in 8.079560728s • [SLOW TEST:48.449 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:17:48.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Apr 28 11:17:53.444: INFO: Pod pod-hostip-e4d82b63-8941-11ea-80e8-0242ac11000f has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:17:53.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vcb7m" for this suite. 
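The Pods test above simply creates a pod and polls until status.hostIP is populated (172.17.0.4 in this run). Reading that field through client-go is a one-liner once the kubelet has reported status; the helper name below is made up and the signature is current client-go:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podHostIP returns the IP of the node a pod has been scheduled onto; it is
// empty until the kubelet has reported status for the pod.
func podHostIP(ctx context.Context, c kubernetes.Interface, ns, name string) (string, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return pod.Status.HostIP, nil
}
```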
Apr 28 11:18:15.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:18:15.511: INFO: namespace: e2e-tests-pods-vcb7m, resource: bindings, ignored listing per whitelist Apr 28 11:18:15.551: INFO: namespace e2e-tests-pods-vcb7m deletion completed in 22.103255886s • [SLOW TEST:26.728 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:18:15.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:18:20.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-xmqp9" for this suite. 
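The ReplicationController adoption case above first creates a bare pod labeled with name "pod-adoption" and then an RC whose selector matches that label; instead of starting a new replica, the controller adopts the existing orphan. A rough Go sketch of a matching pod/RC pair under those assumptions (the nginx image is the one used elsewhere in this run; other field values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	replicas := int32(1)

	// A bare pod carrying the label the controller will select on.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{
			{Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine"},
		}},
	}

	// An RC whose selector matches the pod's labels; when it is created it
	// adopts the existing pod rather than creating another replica.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	fmt.Println(orphan.Name, "adopted by", rc.Name)
}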
Apr 28 11:18:42.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:18:42.794: INFO: namespace: e2e-tests-replication-controller-xmqp9, resource: bindings, ignored listing per whitelist Apr 28 11:18:42.838: INFO: namespace e2e-tests-replication-controller-xmqp9 deletion completed in 22.095052324s • [SLOW TEST:27.286 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:18:42.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:19:15.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-nfb4x" for this suite. 
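The container-runtime blackbox case above starts containers named terminate-cmd-rpa, terminate-cmd-rpof and terminate-cmd-rpn and checks their RestartCount, Phase, Ready condition and State. Reading those suffixes as restart policies Always, OnFailure and Never is my interpretation, not stated in the log. A Go sketch of pods under that assumption (image and command are illustrative); what the kubelet reports after the command exits depends on the chosen policy:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingPod builds a pod whose single container runs a command that
// exits; the observed RestartCount, Phase and State then depend on the
// pod's restartPolicy.
func terminatingPod(name string, policy corev1.RestartPolicy) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 0"},
			}},
		},
	}
}

func main() {
	for _, p := range []corev1.RestartPolicy{
		corev1.RestartPolicyAlways,    // restarts even after a clean exit
		corev1.RestartPolicyOnFailure, // restarts only on non-zero exit
		corev1.RestartPolicyNever,     // never restarts; the pod ends Succeeded or Failed
	} {
		pod := terminatingPod(fmt.Sprintf("terminate-cmd-%s", p), p)
		fmt.Println(pod.Name, "restartPolicy:", pod.Spec.RestartPolicy)
	}
}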
Apr 28 11:19:21.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:19:21.458: INFO: namespace: e2e-tests-container-runtime-nfb4x, resource: bindings, ignored listing per whitelist Apr 28 11:19:21.512: INFO: namespace e2e-tests-container-runtime-nfb4x deletion completed in 6.087287754s • [SLOW TEST:38.674 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:19:21.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Apr 28 11:19:21.630: INFO: Waiting up to 5m0s for pod "client-containers-1bd70280-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-containers-hw456" to be "success or failure" Apr 28 11:19:21.640: INFO: Pod "client-containers-1bd70280-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.525072ms Apr 28 11:19:23.643: INFO: Pod "client-containers-1bd70280-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012922916s Apr 28 11:19:25.648: INFO: Pod "client-containers-1bd70280-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017225913s STEP: Saw pod success Apr 28 11:19:25.648: INFO: Pod "client-containers-1bd70280-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:19:25.650: INFO: Trying to get logs from node hunter-worker pod client-containers-1bd70280-8942-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:19:25.677: INFO: Waiting for pod client-containers-1bd70280-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:19:25.687: INFO: Pod client-containers-1bd70280-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:19:25.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-hw456" for this suite. 
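The Docker Containers case above ("test override all") creates a pod whose container overrides both the image's default command and its arguments: in the pod API, the container's command field replaces the image ENTRYPOINT and args replaces the image CMD. A short Go sketch with illustrative image and argument values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox",
		// Command overrides the image's ENTRYPOINT and Args overrides its CMD,
		// so together they replace whatever the image would run by default.
		Command: []string{"/bin/sh"},
		Args:    []string{"-c", "echo override all"},
	}
	fmt.Println(c.Command, c.Args)
}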
Apr 28 11:19:31.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:19:31.750: INFO: namespace: e2e-tests-containers-hw456, resource: bindings, ignored listing per whitelist Apr 28 11:19:31.818: INFO: namespace e2e-tests-containers-hw456 deletion completed in 6.126726349s • [SLOW TEST:10.306 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:19:31.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 28 11:19:31.917: INFO: Waiting up to 5m0s for pod "pod-21f83bec-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-tgsnp" to be "success or failure" Apr 28 11:19:31.921: INFO: Pod "pod-21f83bec-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938828ms Apr 28 11:19:33.925: INFO: Pod "pod-21f83bec-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007838545s Apr 28 11:19:35.929: INFO: Pod "pod-21f83bec-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011645411s STEP: Saw pod success Apr 28 11:19:35.929: INFO: Pod "pod-21f83bec-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:19:35.932: INFO: Trying to get logs from node hunter-worker pod pod-21f83bec-8942-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:19:35.961: INFO: Waiting for pod pod-21f83bec-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:19:35.993: INFO: Pod pod-21f83bec-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:19:35.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tgsnp" for this suite. 
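The emptyDir case above mounts a volume backed by tmpfs and checks the mount point's file mode; setting the emptyDir medium to "Memory" is what selects tmpfs, while leaving it empty (the "default medium" case that follows in this run) uses the node's default storage. A Go sketch of the volume definition, with illustrative names and mount path:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium "Memory" backs the emptyDir with tmpfs on the node.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	mount := corev1.VolumeMount{Name: vol.Name, MountPath: "/test-volume"}
	fmt.Println(vol.Name, "mounted at", mount.MountPath)
}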
Apr 28 11:19:42.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:19:42.047: INFO: namespace: e2e-tests-emptydir-tgsnp, resource: bindings, ignored listing per whitelist Apr 28 11:19:42.095: INFO: namespace e2e-tests-emptydir-tgsnp deletion completed in 6.0994326s • [SLOW TEST:10.278 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:19:42.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Apr 28 11:19:42.216: INFO: Waiting up to 5m0s for pod "pod-281cb454-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-cx6x7" to be "success or failure" Apr 28 11:19:42.226: INFO: Pod "pod-281cb454-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.310154ms Apr 28 11:19:44.257: INFO: Pod "pod-281cb454-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041729766s Apr 28 11:19:46.263: INFO: Pod "pod-281cb454-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0471795s STEP: Saw pod success Apr 28 11:19:46.263: INFO: Pod "pod-281cb454-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:19:46.265: INFO: Trying to get logs from node hunter-worker pod pod-281cb454-8942-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:19:46.282: INFO: Waiting for pod pod-281cb454-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:19:46.299: INFO: Pod pod-281cb454-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:19:46.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cx6x7" for this suite. 
Apr 28 11:19:52.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:19:52.375: INFO: namespace: e2e-tests-emptydir-cx6x7, resource: bindings, ignored listing per whitelist Apr 28 11:19:52.447: INFO: namespace e2e-tests-emptydir-cx6x7 deletion completed in 6.144240908s • [SLOW TEST:10.351 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:19:52.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2e4c3cc3-8942-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:19:52.660: INFO: Waiting up to 5m0s for pod "pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-5njjf" to be "success or failure" Apr 28 11:19:52.673: INFO: Pod "pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.318604ms Apr 28 11:19:54.676: INFO: Pod "pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016032295s Apr 28 11:19:56.680: INFO: Pod "pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020071265s STEP: Saw pod success Apr 28 11:19:56.680: INFO: Pod "pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:19:56.684: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 11:19:56.721: INFO: Waiting for pod pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:19:56.736: INFO: Pod pod-secrets-2e57182b-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:19:56.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5njjf" for this suite. 
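The secrets case above mounts a secret as a volume and verifies that a secret with the same name in a different namespace (the e2e-tests-secret-namespace-... namespace destroyed just below) does not interfere: the kubelet resolves the secret name in the pod's own namespace only. A Go sketch of such a volume; the secret name and the 0444 default mode are illustrative, not taken from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0444)
	// The volume references a secret by name; it is always resolved in the
	// pod's namespace, so an identically named secret elsewhere is never seen.
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: &mode,
			},
		},
	}
	fmt.Printf("%s -> secret %q (mode %o)\n", vol.Name, vol.VolumeSource.Secret.SecretName, mode)
}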
Apr 28 11:20:02.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:20:02.775: INFO: namespace: e2e-tests-secrets-5njjf, resource: bindings, ignored listing per whitelist Apr 28 11:20:02.822: INFO: namespace e2e-tests-secrets-5njjf deletion completed in 6.081918038s STEP: Destroying namespace "e2e-tests-secret-namespace-t5r2d" for this suite. Apr 28 11:20:08.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:20:08.895: INFO: namespace: e2e-tests-secret-namespace-t5r2d, resource: bindings, ignored listing per whitelist Apr 28 11:20:08.914: INFO: namespace e2e-tests-secret-namespace-t5r2d deletion completed in 6.0914464s • [SLOW TEST:16.466 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:20:08.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 28 11:20:09.037: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xwlfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwlfb/configmaps/e2e-watch-test-label-changed,UID:38150192-8942-11ea-99e8-0242ac110002,ResourceVersion:7638880,Generation:0,CreationTimestamp:2020-04-28 11:20:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 11:20:09.037: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xwlfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwlfb/configmaps/e2e-watch-test-label-changed,UID:38150192-8942-11ea-99e8-0242ac110002,ResourceVersion:7638881,Generation:0,CreationTimestamp:2020-04-28 11:20:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 28 11:20:09.038: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xwlfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwlfb/configmaps/e2e-watch-test-label-changed,UID:38150192-8942-11ea-99e8-0242ac110002,ResourceVersion:7638882,Generation:0,CreationTimestamp:2020-04-28 11:20:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 28 11:20:19.080: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xwlfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwlfb/configmaps/e2e-watch-test-label-changed,UID:38150192-8942-11ea-99e8-0242ac110002,ResourceVersion:7638903,Generation:0,CreationTimestamp:2020-04-28 11:20:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 11:20:19.080: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xwlfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwlfb/configmaps/e2e-watch-test-label-changed,UID:38150192-8942-11ea-99e8-0242ac110002,ResourceVersion:7638904,Generation:0,CreationTimestamp:2020-04-28 11:20:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 28 11:20:19.080: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xwlfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwlfb/configmaps/e2e-watch-test-label-changed,UID:38150192-8942-11ea-99e8-0242ac110002,ResourceVersion:7638905,Generation:0,CreationTimestamp:2020-04-28 11:20:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:20:19.080: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-xwlfb" for this suite. Apr 28 11:20:25.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:20:25.159: INFO: namespace: e2e-tests-watch-xwlfb, resource: bindings, ignored listing per whitelist Apr 28 11:20:25.169: INFO: namespace e2e-tests-watch-xwlfb deletion completed in 6.084517085s • [SLOW TEST:16.256 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:20:25.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Apr 28 11:20:25.267: INFO: Waiting up to 5m0s for pod "downward-api-41c5045e-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-x248s" to be "success or failure" Apr 28 11:20:25.270: INFO: Pod "downward-api-41c5045e-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.614046ms Apr 28 11:20:27.274: INFO: Pod "downward-api-41c5045e-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007765962s Apr 28 11:20:29.279: INFO: Pod "downward-api-41c5045e-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012155516s STEP: Saw pod success Apr 28 11:20:29.279: INFO: Pod "downward-api-41c5045e-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:20:29.282: INFO: Trying to get logs from node hunter-worker pod downward-api-41c5045e-8942-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 11:20:29.301: INFO: Waiting for pod downward-api-41c5045e-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:20:29.325: INFO: Pod downward-api-41c5045e-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:20:29.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x248s" for this suite. 
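The downward-api case above injects the pod's name, namespace and IP into the container as environment variables via fieldRef; the earlier "host IP as an env var" case in this run uses the same mechanism with status.hostIP. A Go sketch of those env var definitions (variable names are illustrative; the field paths are the standard downward API ones):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// fieldEnv builds an env var whose value is filled in from the pod's own
// metadata or status through the downward API.
func fieldEnv(name, fieldPath string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
		},
	}
}

func main() {
	env := []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
		fieldEnv("HOST_IP", "status.hostIP"), // the earlier host-IP variant
	}
	for _, e := range env {
		fmt.Println(e.Name, "<-", e.ValueFrom.FieldRef.FieldPath)
	}
}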
Apr 28 11:20:35.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:20:35.408: INFO: namespace: e2e-tests-downward-api-x248s, resource: bindings, ignored listing per whitelist Apr 28 11:20:35.456: INFO: namespace e2e-tests-downward-api-x248s deletion completed in 6.128011503s • [SLOW TEST:10.286 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:20:35.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:20:35.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-728v7" for this suite. 
Apr 28 11:20:41.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:20:41.692: INFO: namespace: e2e-tests-kubelet-test-728v7, resource: bindings, ignored listing per whitelist Apr 28 11:20:41.747: INFO: namespace e2e-tests-kubelet-test-728v7 deletion completed in 6.087755114s • [SLOW TEST:6.291 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:20:41.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:20:41.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-xfkf8" to be "success or failure" Apr 28 11:20:41.922: INFO: Pod "downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.909126ms Apr 28 11:20:43.927: INFO: Pod "downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034194463s Apr 28 11:20:45.931: INFO: Pod "downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038220917s STEP: Saw pod success Apr 28 11:20:45.931: INFO: Pod "downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:20:45.933: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:20:45.971: INFO: Waiting for pod downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:20:45.982: INFO: Pod downwardapi-volume-4bafba20-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:20:45.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xfkf8" for this suite. 
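The projected downwardAPI case above exposes limits.memory through a projected volume; because the client-container sets no memory limit, the value written into the file falls back to the node's allocatable memory, which is what the test asserts. A Go sketch of the volume under those assumptions (the file path "memory_limit" is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// With no memory limit on the container, the kubelet
							// writes the node-allocatable value into this file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name, "->", vol.VolumeSource.Projected.Sources[0].DownwardAPI.Items[0].Path)
}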
Apr 28 11:20:51.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:20:52.025: INFO: namespace: e2e-tests-projected-xfkf8, resource: bindings, ignored listing per whitelist Apr 28 11:20:52.077: INFO: namespace e2e-tests-projected-xfkf8 deletion completed in 6.092827625s • [SLOW TEST:10.331 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:20:52.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:20:52.222: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 28 11:20:57.227: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 28 11:20:57.227: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 28 11:20:59.231: INFO: Creating deployment "test-rollover-deployment" Apr 28 11:20:59.241: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 28 11:21:01.248: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 28 11:21:01.254: INFO: Ensure that both replica sets have 1 created replica Apr 28 11:21:01.258: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 28 11:21:01.264: INFO: Updating deployment test-rollover-deployment Apr 28 11:21:01.264: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 28 11:21:03.273: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 28 11:21:03.278: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 28 11:21:03.282: INFO: all replica sets need to contain the pod-template-hash label Apr 28 11:21:03.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669661, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:21:05.291: INFO: all replica sets need to contain the pod-template-hash label Apr 28 11:21:05.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669664, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:21:07.291: INFO: all replica sets need to contain the pod-template-hash label Apr 28 11:21:07.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669664, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:21:09.290: INFO: all replica sets need to contain the pod-template-hash label Apr 28 11:21:09.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669664, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:21:11.289: INFO: all replica sets need to contain the pod-template-hash label Apr 28 11:21:11.289: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669664, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:21:13.289: INFO: all replica sets need to contain the pod-template-hash label Apr 28 11:21:13.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669664, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723669659, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:21:15.289: INFO: Apr 28 11:21:15.289: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Apr 28 11:21:15.296: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-wdgck,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wdgck/deployments/test-rollover-deployment,UID:56066cc2-8942-11ea-99e8-0242ac110002,ResourceVersion:7639154,Generation:2,CreationTimestamp:2020-04-28 11:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] 
{map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-28 11:20:59 +0000 UTC 2020-04-28 11:20:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-28 11:21:14 +0000 UTC 2020-04-28 11:20:59 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 28 11:21:15.299: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-wdgck,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wdgck/replicasets/test-rollover-deployment-5b8479fdb6,UID:573c97bc-8942-11ea-99e8-0242ac110002,ResourceVersion:7639143,Generation:2,CreationTimestamp:2020-04-28 11:21:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 56066cc2-8942-11ea-99e8-0242ac110002 0xc001bb00d7 0xc001bb00d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 28 11:21:15.299: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 28 11:21:15.300: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-wdgck,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wdgck/replicasets/test-rollover-controller,UID:51d60b91-8942-11ea-99e8-0242ac110002,ResourceVersion:7639153,Generation:2,CreationTimestamp:2020-04-28 11:20:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 56066cc2-8942-11ea-99e8-0242ac110002 0xc001febe7f 0xc001febe90}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 11:21:15.300: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-wdgck,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wdgck/replicasets/test-rollover-deployment-58494b7559,UID:560a57b8-8942-11ea-99e8-0242ac110002,ResourceVersion:7639110,Generation:2,CreationTimestamp:2020-04-28 11:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 56066cc2-8942-11ea-99e8-0242ac110002 0xc001bb0007 0xc001bb0008}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 11:21:15.303: INFO: Pod "test-rollover-deployment-5b8479fdb6-r4wrp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-r4wrp,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-wdgck,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wdgck/pods/test-rollover-deployment-5b8479fdb6-r4wrp,UID:57486480-8942-11ea-99e8-0242ac110002,ResourceVersion:7639121,Generation:0,CreationTimestamp:2020-04-28 11:21:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 573c97bc-8942-11ea-99e8-0242ac110002 0xc001bb0c87 0xc001bb0c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8ktvk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8ktvk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8ktvk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb0d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb0d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:21:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:21:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:21:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-04-28 11:21:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.151,StartTime:2020-04-28 11:21:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-28 11:21:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b98d25801ab50f70224391dabe3e26a5e7092e03c9779f555b716dc2e75f41e1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:21:15.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-wdgck" for this suite. Apr 28 11:21:23.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:21:23.371: INFO: namespace: e2e-tests-deployment-wdgck, resource: bindings, ignored listing per whitelist Apr 28 11:21:23.392: INFO: namespace e2e-tests-deployment-wdgck deletion completed in 8.085204789s • [SLOW TEST:31.314 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:21:23.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 28 11:21:28.023: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6477e518-8942-11ea-80e8-0242ac11000f" Apr 28 11:21:28.023: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6477e518-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-pods-b7h6k" to be "terminated due to deadline exceeded" Apr 28 11:21:28.046: INFO: Pod "pod-update-activedeadlineseconds-6477e518-8942-11ea-80e8-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 23.285697ms Apr 28 11:21:30.050: INFO: Pod "pod-update-activedeadlineseconds-6477e518-8942-11ea-80e8-0242ac11000f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.027641377s Apr 28 11:21:30.050: INFO: Pod "pod-update-activedeadlineseconds-6477e518-8942-11ea-80e8-0242ac11000f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:21:30.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-b7h6k" for this suite. Apr 28 11:21:36.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:21:36.151: INFO: namespace: e2e-tests-pods-b7h6k, resource: bindings, ignored listing per whitelist Apr 28 11:21:36.152: INFO: namespace e2e-tests-pods-b7h6k deletion completed in 6.097103694s • [SLOW TEST:12.760 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:21:36.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-fm6d5/configmap-test-6c1a1f38-8942-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:21:36.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-fm6d5" to be "success or failure" Apr 28 11:21:36.302: INFO: Pod "pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.276839ms Apr 28 11:21:38.307: INFO: Pod "pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022626564s Apr 28 11:21:40.311: INFO: Pod "pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026910972s STEP: Saw pod success Apr 28 11:21:40.311: INFO: Pod "pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:21:40.314: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f container env-test: STEP: delete the pod Apr 28 11:21:40.376: INFO: Waiting for pod pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:21:40.404: INFO: Pod pod-configmaps-6c1acc02-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:21:40.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fm6d5" for this suite. 
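Note: the ConfigMap test above can be reproduced outside the e2e framework with a minimal sketch like the following. The names (configmap-demo, env-demo, CONFIG_DATA_1) and the busybox image are placeholders, not the generated names the suite uses; the mechanism is the same: a ConfigMap key injected through env[].valueFrom.configMapKeyRef.

# Create a ConfigMap and a pod that consumes one key as an environment variable.
kubectl create configmap configmap-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: data-1
EOF
# Once the pod has completed, the log should show: CONFIG_DATA_1=value-1
kubectl logs env-demo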
Apr 28 11:21:46.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:21:46.535: INFO: namespace: e2e-tests-configmap-fm6d5, resource: bindings, ignored listing per whitelist Apr 28 11:21:46.555: INFO: namespace e2e-tests-configmap-fm6d5 deletion completed in 6.147400314s • [SLOW TEST:10.403 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:21:46.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-gqhn STEP: Creating a pod to test atomic-volume-subpath Apr 28 11:21:46.681: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gqhn" in namespace "e2e-tests-subpath-2bswc" to be "success or failure" Apr 28 11:21:46.686: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09421ms Apr 28 11:21:48.690: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008149372s Apr 28 11:21:50.698: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016620048s Apr 28 11:21:52.703: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=true. Elapsed: 6.021097902s Apr 28 11:21:54.705: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 8.023889292s Apr 28 11:21:56.709: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 10.027987011s Apr 28 11:21:58.714: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 12.032225265s Apr 28 11:22:00.718: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 14.036673519s Apr 28 11:22:02.723: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 16.041449649s Apr 28 11:22:04.728: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 18.046636883s Apr 28 11:22:06.732: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 20.05094589s Apr 28 11:22:08.736: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.054982845s Apr 28 11:22:10.741: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Running", Reason="", readiness=false. Elapsed: 24.059597295s Apr 28 11:22:12.745: INFO: Pod "pod-subpath-test-configmap-gqhn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.063767075s STEP: Saw pod success Apr 28 11:22:12.745: INFO: Pod "pod-subpath-test-configmap-gqhn" satisfied condition "success or failure" Apr 28 11:22:12.749: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-gqhn container test-container-subpath-configmap-gqhn: STEP: delete the pod Apr 28 11:22:12.783: INFO: Waiting for pod pod-subpath-test-configmap-gqhn to disappear Apr 28 11:22:12.798: INFO: Pod pod-subpath-test-configmap-gqhn no longer exists STEP: Deleting pod pod-subpath-test-configmap-gqhn Apr 28 11:22:12.798: INFO: Deleting pod "pod-subpath-test-configmap-gqhn" in namespace "e2e-tests-subpath-2bswc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:22:12.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-2bswc" for this suite. Apr 28 11:22:18.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:22:18.894: INFO: namespace: e2e-tests-subpath-2bswc, resource: bindings, ignored listing per whitelist Apr 28 11:22:18.952: INFO: namespace e2e-tests-subpath-2bswc deletion completed in 6.125240334s • [SLOW TEST:32.397 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:22:18.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Apr 28 11:22:19.055: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:22:24.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-lktzm" for this suite. 
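Note: the InitContainer/RestartNever behaviour exercised above can be sketched as follows (pod and container names are placeholders). With restartPolicy Never, a failing init container is not retried, the app container is never started, and the pod ends up in phase Failed.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]   # init container exits non-zero
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]
EOF
# Expect phase Failed and an Init:Error status; the app container never runs.
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'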
Apr 28 11:22:30.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:22:30.828: INFO: namespace: e2e-tests-init-container-lktzm, resource: bindings, ignored listing per whitelist Apr 28 11:22:30.884: INFO: namespace e2e-tests-init-container-lktzm deletion completed in 6.100179413s • [SLOW TEST:11.931 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:22:30.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-8cb46603-8942-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:22:30.998: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-7pqxd" to be "success or failure" Apr 28 11:22:31.002: INFO: Pod "pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.848204ms Apr 28 11:22:33.007: INFO: Pod "pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008462646s Apr 28 11:22:35.011: INFO: Pod "pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012721334s STEP: Saw pod success Apr 28 11:22:35.011: INFO: Pod "pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:22:35.014: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Apr 28 11:22:35.051: INFO: Waiting for pod pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:22:35.055: INFO: Pod pod-projected-configmaps-8cb69213-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:22:35.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7pqxd" for this suite. 
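Note: a rough equivalent of the projected-ConfigMap-with-mappings test above, assuming placeholder names and a non-root UID of 1000. The key data-1 is remapped to a different file name via items[].path inside a projected volume, and the pod-level securityContext makes the reader run as non-root.

kubectl create configmap projected-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # run the container as a non-root UID
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/remapped-key"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: remapped-key   # key data-1 appears as file remapped-key
EOF
# Expect the remapped file to contain: value-1
kubectl logs projected-demo-pod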
Apr 28 11:22:41.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:22:41.108: INFO: namespace: e2e-tests-projected-7pqxd, resource: bindings, ignored listing per whitelist Apr 28 11:22:41.170: INFO: namespace e2e-tests-projected-7pqxd deletion completed in 6.111269903s • [SLOW TEST:10.286 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:22:41.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 11:22:41.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-4w7l7' Apr 28 11:22:43.609: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 11:22:43.609: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Apr 28 11:22:43.648: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-68qn9] Apr 28 11:22:43.648: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-68qn9" in namespace "e2e-tests-kubectl-4w7l7" to be "running and ready" Apr 28 11:22:43.663: INFO: Pod "e2e-test-nginx-rc-68qn9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.141265ms Apr 28 11:22:45.667: INFO: Pod "e2e-test-nginx-rc-68qn9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019066328s Apr 28 11:22:47.672: INFO: Pod "e2e-test-nginx-rc-68qn9": Phase="Running", Reason="", readiness=true. Elapsed: 4.024106391s Apr 28 11:22:47.672: INFO: Pod "e2e-test-nginx-rc-68qn9" satisfied condition "running and ready" Apr 28 11:22:47.672: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-68qn9] Apr 28 11:22:47.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-4w7l7' Apr 28 11:22:47.793: INFO: stderr: "" Apr 28 11:22:47.793: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Apr 28 11:22:47.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-4w7l7' Apr 28 11:22:47.894: INFO: stderr: "" Apr 28 11:22:47.894: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:22:47.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4w7l7" for this suite. Apr 28 11:23:09.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:23:09.933: INFO: namespace: e2e-tests-kubectl-4w7l7, resource: bindings, ignored listing per whitelist Apr 28 11:23:09.992: INFO: namespace e2e-tests-kubectl-4w7l7 deletion completed in 22.081581411s • [SLOW TEST:28.821 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:23:09.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-nhvxf.svc.cluster.local)" && echo OK > 
/results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-nhvxf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nhvxf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-nhvxf.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-nhvxf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nhvxf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 11:23:16.224: INFO: DNS probes using e2e-tests-dns-nhvxf/dns-test-a4033024-8942-11ea-80e8-0242ac11000f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:23:16.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-nhvxf" for this suite. 
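Note: the dig loops above are what the wheezy/jessie prober pods run; a simpler manual spot-check of cluster DNS can be done with a throwaway pod. busybox:1.28 is an assumption here (a commonly used image whose nslookup behaves well), not the image the suite deploys.

# Resolve the API server service name through cluster DNS from inside a pod.
kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
# A successful answer (the kubernetes service ClusterIP) corresponds to the
# OK markers each prober writes to /results/ in the test above.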
Apr 28 11:23:22.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:23:22.334: INFO: namespace: e2e-tests-dns-nhvxf, resource: bindings, ignored listing per whitelist Apr 28 11:23:22.390: INFO: namespace e2e-tests-dns-nhvxf deletion completed in 6.120273483s • [SLOW TEST:12.398 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:23:22.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:23:22.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-k2xx8" for this suite. 
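Note: the "secure master service" check above produces no visible steps because it only inspects existing objects. Roughly, it verifies that the kubernetes service in the default namespace exposes the API over https/443; that can be eyeballed with:

kubectl get service kubernetes -n default -o wide
kubectl get endpoints kubernetes -n default
# Expect a ClusterIP service with PORT(S) 443/TCP and at least one endpoint.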
Apr 28 11:23:28.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:23:28.562: INFO: namespace: e2e-tests-services-k2xx8, resource: bindings, ignored listing per whitelist Apr 28 11:23:28.611: INFO: namespace e2e-tests-services-k2xx8 deletion completed in 6.081681214s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.220 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:23:28.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:23:28.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-d4x25" to be "success or failure" Apr 28 11:23:28.739: INFO: Pod "downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351774ms Apr 28 11:23:30.744: INFO: Pod "downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009258503s Apr 28 11:23:32.749: INFO: Pod "downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013716636s STEP: Saw pod success Apr 28 11:23:32.749: INFO: Pod "downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:23:32.752: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:23:32.770: INFO: Waiting for pod downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:23:32.792: INFO: Pod downwardapi-volume-af1dd0fc-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:23:32.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-d4x25" for this suite. 
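Note: a minimal sketch of the downward API per-item mode behaviour tested above, with placeholder names. A single item projects metadata.name into a file whose mode is forced to 0400; the projected entry is a symlink, so dereference it (ls -lL) when checking the mode.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo/podname; cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                 # per-item file mode under test
        fieldRef:
          fieldPath: metadata.name
EOF
# Expect a -r-------- file containing the pod name.
kubectl logs downward-mode-demo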
Apr 28 11:23:38.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:23:38.822: INFO: namespace: e2e-tests-downward-api-d4x25, resource: bindings, ignored listing per whitelist Apr 28 11:23:38.905: INFO: namespace e2e-tests-downward-api-d4x25 deletion completed in 6.110276735s • [SLOW TEST:10.294 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:23:38.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 28 11:23:46.213: INFO: 0 pods remaining Apr 28 11:23:46.213: INFO: 0 pods has nil DeletionTimestamp Apr 28 11:23:46.213: INFO: Apr 28 11:23:46.666: INFO: 0 pods remaining Apr 28 11:23:46.666: INFO: 0 pods has nil DeletionTimestamp Apr 28 11:23:46.666: INFO: STEP: Gathering metrics W0428 11:23:47.523408 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 11:23:47.523: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:23:47.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-vc9hz" for this suite. 
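Note: the garbage-collector behaviour above (the RC lingers until its pods are gone) comes from foreground deletion. A sketch of issuing that kind of delete by hand, using placeholder names and the kubectl proxy/curl pattern from the Kubernetes garbage-collection docs:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF
# Foreground deletion: the RC keeps a foregroundDeletion finalizer and is only
# removed after the garbage collector has deleted all of its pods.
kubectl proxy --port=8080 &
curl -X DELETE localhost:8080/api/v1/namespaces/default/replicationcontrollers/gc-demo \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'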
Apr 28 11:23:53.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:23:53.579: INFO: namespace: e2e-tests-gc-vc9hz, resource: bindings, ignored listing per whitelist Apr 28 11:23:53.638: INFO: namespace e2e-tests-gc-vc9hz deletion completed in 6.112799626s • [SLOW TEST:14.733 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:23:53.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:24:53.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bmmvk" for this suite. 
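Note: the probing test above has no visible steps because it simply watches a pod for a minute. A sketch of the same situation (placeholder names): a readiness probe that always fails leaves the pod Running but never Ready, and, unlike a failing liveness probe, never triggers a container restart.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Expect READY 0/1 and RESTARTS 0 for as long as the pod runs.
kubectl get pod readiness-fail-demo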
Apr 28 11:25:15.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:25:15.866: INFO: namespace: e2e-tests-container-probe-bmmvk, resource: bindings, ignored listing per whitelist Apr 28 11:25:15.878: INFO: namespace e2e-tests-container-probe-bmmvk deletion completed in 22.094146291s • [SLOW TEST:82.239 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:25:15.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-ef1016ca-8942-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:25:16.007: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-hw2x8" to be "success or failure" Apr 28 11:25:16.017: INFO: Pod "pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12737ms Apr 28 11:25:18.021: INFO: Pod "pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014357842s Apr 28 11:25:20.025: INFO: Pod "pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018639059s STEP: Saw pod success Apr 28 11:25:20.025: INFO: Pod "pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:25:20.029: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f container configmap-volume-test: STEP: delete the pod Apr 28 11:25:20.070: INFO: Waiting for pod pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f to disappear Apr 28 11:25:20.086: INFO: Pod pod-configmaps-ef11dbab-8942-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:25:20.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hw2x8" for this suite. 
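Note: the defaultMode variant above can be sketched with placeholder names as follows; defaultMode applies to every file projected from the ConfigMap.

kubectl create configmap defaultmode-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: defaultmode-demo
      defaultMode: 0400     # every projected file gets mode 0400
EOF
# The container runs as root, so it can still read the 0400 file.
kubectl logs configmap-defaultmode-demo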
Apr 28 11:25:26.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:25:26.161: INFO: namespace: e2e-tests-configmap-hw2x8, resource: bindings, ignored listing per whitelist Apr 28 11:25:26.199: INFO: namespace e2e-tests-configmap-hw2x8 deletion completed in 6.108610566s • [SLOW TEST:10.321 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:25:26.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f53b9041-8942-11ea-80e8-0242ac11000f STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f53b9041-8942-11ea-80e8-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:26:48.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6wrtd" for this suite. 
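Note: the long "waiting to observe update in volume" step above reflects the kubelet's sync delay. A sketch of the same experiment with placeholder names: a pod tails a projected ConfigMap file while the ConfigMap is edited, and the mounted file changes after a delay rather than immediately.

kubectl create configmap update-demo --from-literal=data-1=before
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-update-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: update-demo
EOF
# Change the ConfigMap and watch the mounted value flip from "before" to "after".
kubectl patch configmap update-demo -p '{"data":{"data-1":"after"}}'
kubectl logs -f projected-update-demo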
Apr 28 11:27:10.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:27:10.869: INFO: namespace: e2e-tests-projected-6wrtd, resource: bindings, ignored listing per whitelist Apr 28 11:27:10.929: INFO: namespace e2e-tests-projected-6wrtd deletion completed in 22.087392867s • [SLOW TEST:104.730 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:27:10.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 11:27:11.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-g4smq' Apr 28 11:27:11.126: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 11:27:11.126: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Apr 28 11:27:13.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-g4smq' Apr 28 11:27:13.257: INFO: stderr: "" Apr 28 11:27:13.257: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:27:13.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g4smq" for this suite. 
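Note: as the deprecation warning above says, the deployment generator of kubectl run is being removed. A non-deprecated way to get the same result (assuming kubectl create deployment labels objects with app=<name>, as recent kubectl versions do):

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment,pods -l app=e2e-test-nginx-deployment
kubectl delete deployment e2e-test-nginx-deployment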
Apr 28 11:27:19.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:27:19.295: INFO: namespace: e2e-tests-kubectl-g4smq, resource: bindings, ignored listing per whitelist Apr 28 11:27:19.412: INFO: namespace e2e-tests-kubectl-g4smq deletion completed in 6.140219958s • [SLOW TEST:8.481 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:27:19.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-5swkt Apr 28 11:27:23.526: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-5swkt STEP: checking the pod's current state and verifying that restartCount is present Apr 28 11:27:23.529: INFO: Initial restart count of pod liveness-exec is 0 Apr 28 11:28:19.645: INFO: Restart count of pod e2e-tests-container-probe-5swkt/liveness-exec is now 1 (56.115588343s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:28:19.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5swkt" for this suite. 
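Note: the restart observed above (RESTARTS going from 0 to 1 after roughly a minute) is the classic exec liveness-probe pattern. A sketch of an equivalent pod; the image and exact timings are an approximation of what the suite creates, not a copy of it.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# After ~30s the file disappears, "cat /tmp/health" starts failing, and the
# kubelet restarts the container, so RESTARTS climbs from 0 to 1.
kubectl get pod liveness-exec -w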
Apr 28 11:28:25.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:28:25.715: INFO: namespace: e2e-tests-container-probe-5swkt, resource: bindings, ignored listing per whitelist Apr 28 11:28:25.775: INFO: namespace e2e-tests-container-probe-5swkt deletion completed in 6.0916464s • [SLOW TEST:66.363 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:28:25.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:28:25.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-cj2rw" to be "success or failure" Apr 28 11:28:25.879: INFO: Pod "downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.324248ms Apr 28 11:28:27.884: INFO: Pod "downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025776659s Apr 28 11:28:29.898: INFO: Pod "downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040470381s STEP: Saw pod success Apr 28 11:28:29.898: INFO: Pod "downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:28:29.901: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:28:29.923: INFO: Waiting for pod downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f to disappear Apr 28 11:28:29.928: INFO: Pod downwardapi-volume-603b3118-8943-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:28:29.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cj2rw" for this suite. 
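Note: a sketch of the default-CPU-limit behaviour tested above, with placeholder names. The container declares no CPU limit, and the downward API resourceFieldRef for limits.cpu therefore falls back to the node's allocatable CPU instead of being empty.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
# Expect the node's allocatable CPU (in whole cores) rather than an empty file.
kubectl logs downward-cpu-demo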
Apr 28 11:28:35.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:28:36.020: INFO: namespace: e2e-tests-downward-api-cj2rw, resource: bindings, ignored listing per whitelist Apr 28 11:28:36.029: INFO: namespace e2e-tests-downward-api-cj2rw deletion completed in 6.098663145s • [SLOW TEST:10.255 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:28:36.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 28 11:28:40.143: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-66593198-8943-11ea-80e8-0242ac11000f,GenerateName:,Namespace:e2e-tests-events-mmg6j,SelfLink:/api/v1/namespaces/e2e-tests-events-mmg6j/pods/send-events-66593198-8943-11ea-80e8-0242ac11000f,UID:665b0392-8943-11ea-99e8-0242ac110002,ResourceVersion:7640632,Generation:0,CreationTimestamp:2020-04-28 11:28:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 115715081,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbfhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbfhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-mbfhq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021955d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021955f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:28:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:28:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:28:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:28:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.41,StartTime:2020-04-28 11:28:36 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-28 11:28:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://87009e193410595063c9dcc84b54b89dc2b8a18b2236933c0be0053a2e36dcd3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 28 11:28:42.150: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 28 11:28:44.154: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:28:44.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-mmg6j" for this suite. Apr 28 11:29:22.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:29:22.216: INFO: namespace: e2e-tests-events-mmg6j, resource: bindings, ignored listing per whitelist Apr 28 11:29:22.270: INFO: namespace e2e-tests-events-mmg6j deletion completed in 38.104006519s • [SLOW TEST:46.240 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:29:22.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:29:22.352: INFO: Creating deployment "nginx-deployment" Apr 28 11:29:22.356: INFO: Waiting for observed generation 1 Apr 28 11:29:24.381: INFO: Waiting for all required pods to come up Apr 28 11:29:24.387: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 28 11:29:32.398: INFO: Waiting for deployment "nginx-deployment" to complete Apr 28 
11:29:32.403: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 28 11:29:32.409: INFO: Updating deployment nginx-deployment Apr 28 11:29:32.409: INFO: Waiting for observed generation 2 Apr 28 11:29:34.758: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 28 11:29:34.991: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 28 11:29:35.091: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 28 11:29:35.100: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 28 11:29:35.100: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 28 11:29:35.102: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 28 11:29:35.105: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 28 11:29:35.105: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 28 11:29:35.109: INFO: Updating deployment nginx-deployment Apr 28 11:29:35.109: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 28 11:29:35.286: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 28 11:29:35.320: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Apr 28 11:29:35.505: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nqxsl/deployments/nginx-deployment,UID:81e8a9c4-8943-11ea-99e8-0242ac110002,ResourceVersion:7640931,Generation:3,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-04-28 11:29:32 +0000 UTC 2020-04-28 11:29:22 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-04-28 11:29:35 +0000 UTC 2020-04-28 11:29:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 28 11:29:35.610: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nqxsl/replicasets/nginx-deployment-5c98f8fb5,UID:87e862d5-8943-11ea-99e8-0242ac110002,ResourceVersion:7640929,Generation:3,CreationTimestamp:2020-04-28 11:29:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 81e8a9c4-8943-11ea-99e8-0242ac110002 0xc001b60217 0xc001b60218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 11:29:35.610: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 28 11:29:35.611: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nqxsl/replicasets/nginx-deployment-85ddf47c5d,UID:81ec8331-8943-11ea-99e8-0242ac110002,ResourceVersion:7640976,Generation:3,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 81e8a9c4-8943-11ea-99e8-0242ac110002 0xc001b60387 0xc001b60388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 28 11:29:35.697: INFO: Pod "nginx-deployment-5c98f8fb5-dn5xp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dn5xp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-dn5xp,UID:88040ce1-8943-11ea-99e8-0242ac110002,ResourceVersion:7640915,Generation:0,CreationTimestamp:2020-04-28 11:29:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc00264b7b7 0xc00264b7b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264b830} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264b850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-04-28 11:29:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.697: INFO: Pod "nginx-deployment-5c98f8fb5-f8zdb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f8zdb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-f8zdb,UID:87eebf81-8943-11ea-99e8-0242ac110002,ResourceVersion:7640903,Generation:0,CreationTimestamp:2020-04-28 11:29:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc00264b910 0xc00264b911}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264b990} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264b9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-04-28 11:29:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.698: INFO: Pod "nginx-deployment-5c98f8fb5-gms82" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gms82,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-gms82,UID:89b7c590-8943-11ea-99e8-0242ac110002,ResourceVersion:7640971,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc00264ba70 0xc00264ba71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264baf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264bb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.698: INFO: Pod "nginx-deployment-5c98f8fb5-hqc2v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hqc2v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-hqc2v,UID:89b7c998-8943-11ea-99e8-0242ac110002,ResourceVersion:7640972,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc00264bb87 0xc00264bb88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264bc00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264bc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.698: INFO: Pod "nginx-deployment-5c98f8fb5-n7478" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n7478,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-n7478,UID:89aa13b4-8943-11ea-99e8-0242ac110002,ResourceVersion:7640964,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc00264bc97 0xc00264bc98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264bd10} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc00264bd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.698: INFO: Pod "nginx-deployment-5c98f8fb5-plzmz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-plzmz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-plzmz,UID:89aa035f-8943-11ea-99e8-0242ac110002,ResourceVersion:7640961,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc00264bda7 0xc00264bda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264be20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264be40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.699: INFO: Pod "nginx-deployment-5c98f8fb5-qvdkc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qvdkc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-qvdkc,UID:8809cc07-8943-11ea-99e8-0242ac110002,ResourceVersion:7640918,Generation:0,CreationTimestamp:2020-04-28 11:29:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc00264beb7 0xc00264beb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264bf30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264bf50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-04-28 11:29:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.699: INFO: Pod "nginx-deployment-5c98f8fb5-sh9bd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sh9bd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-sh9bd,UID:87ec905e-8943-11ea-99e8-0242ac110002,ResourceVersion:7640894,Generation:0,CreationTimestamp:2020-04-28 11:29:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc0025ce010 0xc0025ce011}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-04-28 11:29:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.699: INFO: Pod "nginx-deployment-5c98f8fb5-sr9mt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sr9mt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-sr9mt,UID:89a34d78-8943-11ea-99e8-0242ac110002,ResourceVersion:7640987,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc0025ce1a0 0xc0025ce1a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce240} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-04-28 11:29:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.699: INFO: Pod "nginx-deployment-5c98f8fb5-szhr6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-szhr6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-szhr6,UID:89b7bcb1-8943-11ea-99e8-0242ac110002,ResourceVersion:7640974,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc0025ce320 0xc0025ce321}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.700: INFO: Pod "nginx-deployment-5c98f8fb5-v9kp6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v9kp6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-v9kp6,UID:87eec807-8943-11ea-99e8-0242ac110002,ResourceVersion:7640916,Generation:0,CreationTimestamp:2020-04-28 11:29:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc0025ce437 0xc0025ce438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-04-28 11:29:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.700: INFO: Pod "nginx-deployment-5c98f8fb5-xh6j6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xh6j6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-xh6j6,UID:89c0f44b-8943-11ea-99e8-0242ac110002,ResourceVersion:7640983,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc0025ce5b0 0xc0025ce5b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce630} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.700: INFO: Pod "nginx-deployment-5c98f8fb5-zk4qs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zk4qs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-5c98f8fb5-zk4qs,UID:89b7d790-8943-11ea-99e8-0242ac110002,ResourceVersion:7640975,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 87e862d5-8943-11ea-99e8-0242ac110002 0xc0025ce6c7 0xc0025ce6c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce740} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.700: INFO: Pod "nginx-deployment-85ddf47c5d-4pt6p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4pt6p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-4pt6p,UID:89a9e54a-8943-11ea-99e8-0242ac110002,ResourceVersion:7640950,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025ce7d7 0xc0025ce7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce850} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.701: INFO: Pod "nginx-deployment-85ddf47c5d-5jfjz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5jfjz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-5jfjz,UID:89aa150b-8943-11ea-99e8-0242ac110002,ResourceVersion:7640962,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025ce8e7 0xc0025ce8e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ce960} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ce980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.701: INFO: Pod "nginx-deployment-85ddf47c5d-74hfp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-74hfp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-74hfp,UID:81f972aa-8943-11ea-99e8-0242ac110002,ResourceVersion:7640836,Generation:0,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025ce9f7 0xc0025ce9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cea70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cea90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.165,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://01f611e6c0dcb430df1f45e8b3e93f88bbf073649433ac6abe559c8c260b29b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.701: INFO: Pod "nginx-deployment-85ddf47c5d-7psc6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7psc6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-7psc6,UID:89b79f24-8943-11ea-99e8-0242ac110002,ResourceVersion:7640973,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025ceb57 0xc0025ceb58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cebd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cebf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.701: INFO: Pod "nginx-deployment-85ddf47c5d-c796f" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c796f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-c796f,UID:81f47b95-8943-11ea-99e8-0242ac110002,ResourceVersion:7640803,Generation:0,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cec67 0xc0025cec68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cece0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ced00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.164,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ac82f83a51adb4d6e6bb8777a6e9d9aa94c876e621f537c3d63d210e11502bb7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.702: INFO: Pod "nginx-deployment-85ddf47c5d-cgxlp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cgxlp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-cgxlp,UID:89b7a404-8943-11ea-99e8-0242ac110002,ResourceVersion:7640969,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cedc7 0xc0025cedc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cee40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cee60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.702: INFO: Pod "nginx-deployment-85ddf47c5d-f22sx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f22sx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-f22sx,UID:89a37940-8943-11ea-99e8-0242ac110002,ResourceVersion:7640942,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025ceed7 0xc0025ceed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cef50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cef70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.702: INFO: Pod "nginx-deployment-85ddf47c5d-f8hc6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f8hc6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-f8hc6,UID:81f97bb0-8943-11ea-99e8-0242ac110002,ResourceVersion:7640853,Generation:0,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cefe7 0xc0025cefe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf060} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.167,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://35e233a8e7a4fc91c054e6e53bf78885171f5887c85edae2787180739145ceab}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.702: INFO: Pod "nginx-deployment-85ddf47c5d-fgkn6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fgkn6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-fgkn6,UID:899da5ae-8943-11ea-99e8-0242ac110002,ResourceVersion:7640979,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf147 0xc0025cf148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-04-28 11:29:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.702: INFO: Pod "nginx-deployment-85ddf47c5d-h7lcs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h7lcs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-h7lcs,UID:89b78d13-8943-11ea-99e8-0242ac110002,ResourceVersion:7640970,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf297 0xc0025cf298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf310} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.702: INFO: Pod "nginx-deployment-85ddf47c5d-hjs82" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hjs82,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-hjs82,UID:89aa1a40-8943-11ea-99e8-0242ac110002,ResourceVersion:7640963,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf3a7 0xc0025cf3a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf420} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.703: INFO: Pod "nginx-deployment-85ddf47c5d-jp6tl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jp6tl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-jp6tl,UID:89b790fb-8943-11ea-99e8-0242ac110002,ResourceVersion:7640966,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf4b7 0xc0025cf4b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.703: INFO: Pod "nginx-deployment-85ddf47c5d-l5nsx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l5nsx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-l5nsx,UID:89b77fa2-8943-11ea-99e8-0242ac110002,ResourceVersion:7640965,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf5c7 
0xc0025cf5c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.703: INFO: Pod "nginx-deployment-85ddf47c5d-l9zvz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l9zvz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-l9zvz,UID:81f519dc-8943-11ea-99e8-0242ac110002,ResourceVersion:7640822,Generation:0,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf6d7 0xc0025cf6d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf750} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.42,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://54c8cdfae93be5443e9eda05ee321573b144ca5173e7a552451980fb645258b4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.703: INFO: Pod "nginx-deployment-85ddf47c5d-lpd7d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lpd7d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-lpd7d,UID:89a9ff9e-8943-11ea-99e8-0242ac110002,ResourceVersion:7640960,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf837 0xc0025cf838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.703: INFO: Pod "nginx-deployment-85ddf47c5d-pdjp9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pdjp9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-pdjp9,UID:89a3744b-8943-11ea-99e8-0242ac110002,ResourceVersion:7640946,Generation:0,CreationTimestamp:2020-04-28 11:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cf947 0xc0025cf948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cf9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cf9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:35 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.703: INFO: Pod "nginx-deployment-85ddf47c5d-qmjhv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qmjhv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-qmjhv,UID:81f51d00-8943-11ea-99e8-0242ac110002,ResourceVersion:7640830,Generation:0,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cfb07 0xc0025cfb08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cfb80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cfba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.43,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8852fb16eb5d51df127a1fdaff8ee4541ec6b956b58cfa08a275b55a3f6626b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.704: INFO: Pod "nginx-deployment-85ddf47c5d-qs2p9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qs2p9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-qs2p9,UID:81f97dbf-8943-11ea-99e8-0242ac110002,ResourceVersion:7640845,Generation:0,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cfc67 0xc0025cfc68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cfce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cfd00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.44,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://082163a2877d904ec5ebad027517e374fff1516d4e59e4a98039d3cec544629d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.704: INFO: Pod "nginx-deployment-85ddf47c5d-r9mdl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r9mdl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-r9mdl,UID:81f9768c-8943-11ea-99e8-0242ac110002,ResourceVersion:7640850,Generation:0,CreationTimestamp:2020-04-28 
11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cfdc7 0xc0025cfdc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cfe40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cfe60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.166,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1a7f5a5b9915d84f97276feaa7736b659b837452fd3994a32da2c4bba49d7355}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 11:29:35.704: INFO: Pod "nginx-deployment-85ddf47c5d-xdm9x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xdm9x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nqxsl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nqxsl/pods/nginx-deployment-85ddf47c5d-xdm9x,UID:81ff295d-8943-11ea-99e8-0242ac110002,ResourceVersion:7640864,Generation:0,CreationTimestamp:2020-04-28 11:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 81ec8331-8943-11ea-99e8-0242ac110002 0xc0025cff27 
0xc0025cff28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-btf6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-btf6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-btf6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025cffa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025cffc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:29:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.46,StartTime:2020-04-28 11:29:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 11:29:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://97ec31ca7987051c3c67346290f0a769e06ceb69416aa268bca12acb97673c71}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:29:35.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-nqxsl" for this suite. 
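The "is available" / "is not available" verdicts in the pod dumps above come down to the pod's Ready condition. Below is a minimal, self-contained Go sketch of that kind of check; the helper name and the minReadySeconds handling are illustrative and are not the e2e framework's own code.

```go
// Sketch only: a pod counts as available once its PodReady condition is True
// and (when minReadySeconds is set) that condition has held long enough.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable reports whether pod has been Ready for at least minReadySeconds
// as of "now". With minReadySeconds == 0 it reduces to "PodReady is True".
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type != corev1.PodReady || cond.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := now.Sub(cond.LastTransitionTime.Time)
		return readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	// Freshly scheduled pod: only PodScheduled is set, so it reads as "not available",
	// matching the Pending pods in the dumps above.
	pending := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
			},
		},
	}
	// Running pod whose Ready condition flipped to True a while ago.
	running := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{
					Type:               corev1.PodReady,
					Status:             corev1.ConditionTrue,
					LastTransitionTime: metav1.NewTime(time.Now().Add(-30 * time.Second)),
				},
			},
		},
	}
	fmt.Println(isPodAvailable(pending, 0, time.Now())) // false
	fmt.Println(isPodAvailable(running, 0, time.Now())) // true
}
```

Pods that have only been scheduled (Phase Pending, PodScheduled True) therefore show up as "not available" until their container is running and the Ready condition flips to True.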
Apr 28 11:29:57.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:29:57.913: INFO: namespace: e2e-tests-deployment-nqxsl, resource: bindings, ignored listing per whitelist Apr 28 11:29:57.971: INFO: namespace e2e-tests-deployment-nqxsl deletion completed in 22.211346082s • [SLOW TEST:35.701 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:29:57.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 28 11:30:06.137: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:06.142: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:08.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:08.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:10.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:10.146: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:12.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:12.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:14.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:14.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:16.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:16.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:18.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:18.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:20.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:20.146: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:22.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:22.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:24.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:24.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:26.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:26.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:28.142: INFO: Waiting for 
pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:28.147: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:30.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:30.146: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 11:30:32.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 11:30:32.146: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:30:32.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-z8sz8" for this suite. Apr 28 11:30:54.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:30:54.247: INFO: namespace: e2e-tests-container-lifecycle-hook-z8sz8, resource: bindings, ignored listing per whitelist Apr 28 11:30:54.261: INFO: namespace e2e-tests-container-lifecycle-hook-z8sz8 deletion completed in 22.100331418s • [SLOW TEST:56.290 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:30:54.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:30:54.376: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 28 11:30:54.382: INFO: Number of nodes with available pods: 0 Apr 28 11:30:54.382: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
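The node-selector behaviour exercised in this step can be pictured as a DaemonSet whose pod template carries a nodeSelector, so its pods land only on nodes with the matching label. The sketch below is a hedged illustration: the label key/value ("color": "blue"), the image, and the use of sigs.k8s.io/yaml for printing are assumptions, not details taken from the test itself.

```go
// Sketch of a DaemonSet constrained by a node label; relabelling a node to match
// schedules a daemon pod there, relabelling it away evicts the pod again, which
// is the behaviour the log lines below are polling for.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	labels := map[string]string{"app": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy the test switches to later in this spec.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labelled color=blue are eligible (placeholder label).
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{
						{Name: "app", Image: "docker.io/library/nginx:1.14-alpine"},
					},
				},
			},
		},
	}
	out, _ := yaml.Marshal(ds)
	fmt.Println(string(out))
}
```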
Apr 28 11:30:54.449: INFO: Number of nodes with available pods: 0 Apr 28 11:30:54.449: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:30:55.453: INFO: Number of nodes with available pods: 0 Apr 28 11:30:55.453: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:30:56.454: INFO: Number of nodes with available pods: 0 Apr 28 11:30:56.454: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:30:57.548: INFO: Number of nodes with available pods: 1 Apr 28 11:30:57.548: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 28 11:30:57.597: INFO: Number of nodes with available pods: 1 Apr 28 11:30:57.597: INFO: Number of running nodes: 0, number of available pods: 1 Apr 28 11:30:58.602: INFO: Number of nodes with available pods: 0 Apr 28 11:30:58.602: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 28 11:30:58.655: INFO: Number of nodes with available pods: 0 Apr 28 11:30:58.655: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:30:59.659: INFO: Number of nodes with available pods: 0 Apr 28 11:30:59.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:00.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:00.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:01.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:01.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:02.660: INFO: Number of nodes with available pods: 0 Apr 28 11:31:02.660: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:03.660: INFO: Number of nodes with available pods: 0 Apr 28 11:31:03.660: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:04.660: INFO: Number of nodes with available pods: 0 Apr 28 11:31:04.660: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:05.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:05.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:06.660: INFO: Number of nodes with available pods: 0 Apr 28 11:31:06.660: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:07.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:07.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:08.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:08.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:09.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:09.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:10.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:10.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:11.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:11.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:12.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:12.660: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:13.659: INFO: Number of nodes with available pods: 0 Apr 28 11:31:13.659: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:31:14.659: INFO: Number of nodes with available pods: 1 Apr 28 11:31:14.659: INFO: Number of running nodes: 1, 
number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lnlmt, will wait for the garbage collector to delete the pods Apr 28 11:31:14.724: INFO: Deleting DaemonSet.extensions daemon-set took: 6.702437ms Apr 28 11:31:14.824: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.240573ms Apr 28 11:31:21.327: INFO: Number of nodes with available pods: 0 Apr 28 11:31:21.327: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 11:31:21.330: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lnlmt/daemonsets","resourceVersion":"7641481"},"items":null} Apr 28 11:31:21.332: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lnlmt/pods","resourceVersion":"7641481"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:31:21.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-lnlmt" for this suite. Apr 28 11:31:27.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:31:27.405: INFO: namespace: e2e-tests-daemonsets-lnlmt, resource: bindings, ignored listing per whitelist Apr 28 11:31:27.453: INFO: namespace e2e-tests-daemonsets-lnlmt deletion completed in 6.091878958s • [SLOW TEST:33.192 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:31:27.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Apr 28 11:31:27.532: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 11:31:27.548: INFO: Waiting for terminating namespaces to be deleted... 
Apr 28 11:31:27.550: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Apr 28 11:31:27.558: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Apr 28 11:31:27.558: INFO: Container coredns ready: true, restart count 0 Apr 28 11:31:27.558: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Apr 28 11:31:27.558: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 11:31:27.558: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 11:31:27.558: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 11:31:27.558: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Apr 28 11:31:27.562: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 11:31:27.562: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 11:31:27.562: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Apr 28 11:31:27.562: INFO: Container coredns ready: true, restart count 0 Apr 28 11:31:27.562: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 11:31:27.562: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cef0f2ab-8943-11ea-80e8-0242ac11000f 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-cef0f2ab-8943-11ea-80e8-0242ac11000f off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-cef0f2ab-8943-11ea-80e8-0242ac11000f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:31:35.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-sx8kk" for this suite. 
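The relaunched pod in the step above is pinned to the freshly labelled node through spec.nodeSelector. A minimal sketch of such a pod follows; the label key and value are copied from the log lines above, while the pod name, image, and the use of sigs.k8s.io/yaml are illustrative placeholders.

```go
// Sketch of a pod that can only be scheduled on the node carrying the
// randomly generated e2e label applied earlier in this spec.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// Schedulable only on the node that carries this label.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-cef0f2ab-8943-11ea-80e8-0242ac11000f": "42",
			},
			Containers: []corev1.Container{
				{Name: "with-labels", Image: "docker.io/library/nginx:1.14-alpine"},
			},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```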
Apr 28 11:31:45.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:31:45.748: INFO: namespace: e2e-tests-sched-pred-sx8kk, resource: bindings, ignored listing per whitelist Apr 28 11:31:45.796: INFO: namespace e2e-tests-sched-pred-sx8kk deletion completed in 10.089216463s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:18.343 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:31:45.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Apr 28 11:31:45.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 28 11:31:45.988: INFO: stderr: "" Apr 28 11:31:45.988: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:31:45.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c7qd4" for this suite. 
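Note on the cluster-info spec above: the test simply shells out to kubectl and checks that the master and KubeDNS endpoints appear in stdout. A rough equivalent in Go; the kubeconfig path is an assumption taken from the log, and the substring check is a simplification of what the conformance test validates.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Run `kubectl cluster-info` and look for the "Kubernetes master" line.
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl cluster-info failed: %v\n%s", err, out)
	}
	if strings.Contains(string(out), "Kubernetes master") {
		fmt.Println("master service advertised in cluster-info")
	}
}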
Apr 28 11:31:52.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:31:52.052: INFO: namespace: e2e-tests-kubectl-c7qd4, resource: bindings, ignored listing per whitelist Apr 28 11:31:52.076: INFO: namespace e2e-tests-kubectl-c7qd4 deletion completed in 6.084504243s • [SLOW TEST:6.279 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:31:52.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:31:52.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-fgv5t" to be "success or failure" Apr 28 11:31:52.203: INFO: Pod "downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108345ms Apr 28 11:31:54.207: INFO: Pod "downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011876756s Apr 28 11:31:56.211: INFO: Pod "downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015491087s STEP: Saw pod success Apr 28 11:31:56.211: INFO: Pod "downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:31:56.213: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:31:56.228: INFO: Waiting for pod downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f to disappear Apr 28 11:31:56.234: INFO: Pod downwardapi-volume-db37c683-8943-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:31:56.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fgv5t" for this suite. 
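Note on the Downward API volume spec above: the test pod mounts a downwardAPI volume whose single file carries metadata.name, and the test reads that file back from the container log. A minimal sketch of such a volume; pod name, image, and mount path are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							// Expose the pod's own name as the file contents.
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}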
Apr 28 11:32:02.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:32:02.283: INFO: namespace: e2e-tests-downward-api-fgv5t, resource: bindings, ignored listing per whitelist Apr 28 11:32:02.344: INFO: namespace e2e-tests-downward-api-fgv5t deletion completed in 6.106603876s • [SLOW TEST:10.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:32:02.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:32:22.534: INFO: Container started at 2020-04-28 11:32:04 +0000 UTC, pod became ready at 2020-04-28 11:32:21 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:32:22.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rh6h9" for this suite. 
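Note on the readiness-probe spec above: the container must not report Ready before the probe's initial delay has elapsed, and it must never restart. A sketch of a container carrying such a probe; the exec command and delays are illustrative, and the field names assume the v1.13-era core/v1 API, where the probe handler is the embedded Handler struct.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "test-webserver",
		Image:   "busybox", // placeholder
		Command: []string{"sh", "-c", "touch /tmp/ready && sleep 600"},
		ReadinessProbe: &corev1.Probe{
			// Handler is the embedded field in v1.13-era APIs
			// (renamed ProbeHandler in much later releases).
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
			},
			InitialDelaySeconds: 15, // the pod must not be Ready before this
			PeriodSeconds:       5,
		},
	}
	fmt.Println(container.ReadinessProbe.InitialDelaySeconds)
}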
Apr 28 11:32:44.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:32:44.596: INFO: namespace: e2e-tests-container-probe-rh6h9, resource: bindings, ignored listing per whitelist Apr 28 11:32:44.626: INFO: namespace e2e-tests-container-probe-rh6h9 deletion completed in 22.088311721s • [SLOW TEST:42.282 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:32:44.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-v6dfk Apr 28 11:32:48.736: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-v6dfk STEP: checking the pod's current state and verifying that restartCount is present Apr 28 11:32:48.739: INFO: Initial restart count of pod liveness-http is 0 Apr 28 11:33:10.785: INFO: Restart count of pod e2e-tests-container-probe-v6dfk/liveness-http is now 1 (22.046864234s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:33:10.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-v6dfk" for this suite. 
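Note on the liveness-probe spec above: the pod serves /healthz, the endpoint starts failing after a delay, and the kubelet restarts the container, which is why the restart count goes from 0 to 1 in the log. A sketch of the probe wiring; the image, port, and thresholds are illustrative, with the same v1.13-era field-name caveat as the readiness sketch.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "liveness",
		Image: "k8s.gcr.io/liveness", // placeholder; any image exposing /healthz works
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080),
				},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1, // restart as soon as /healthz starts failing
		},
	}
	fmt.Println(container.LivenessProbe.HTTPGet.Path)
}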
Apr 28 11:33:16.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:33:16.919: INFO: namespace: e2e-tests-container-probe-v6dfk, resource: bindings, ignored listing per whitelist Apr 28 11:33:16.923: INFO: namespace e2e-tests-container-probe-v6dfk deletion completed in 6.094676061s • [SLOW TEST:32.297 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:33:16.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-q6lkc/secret-test-0dc544fe-8944-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:33:17.028: INFO: Waiting up to 5m0s for pod "pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-q6lkc" to be "success or failure" Apr 28 11:33:17.032: INFO: Pod "pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.990907ms Apr 28 11:33:19.064: INFO: Pod "pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036659233s Apr 28 11:33:21.068: INFO: Pod "pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040682484s STEP: Saw pod success Apr 28 11:33:21.068: INFO: Pod "pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:33:21.071: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f container env-test: STEP: delete the pod Apr 28 11:33:21.087: INFO: Waiting for pod pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:33:21.102: INFO: Pod pod-configmaps-0dc7c4f2-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:33:21.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-q6lkc" for this suite. 
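Note on the Secrets-via-environment spec above: a secret key is injected into the container as an environment variable and the test container simply echoes it. A minimal sketch; the secret name, key, and variable name are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "env-test",
		Image:   "busybox", // placeholder
		Command: []string{"sh", "-c", "echo $SECRET_DATA"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				// Pull the value of one key from an existing Secret.
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	fmt.Println(container.Env[0].Name)
}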
Apr 28 11:33:27.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:33:27.170: INFO: namespace: e2e-tests-secrets-q6lkc, resource: bindings, ignored listing per whitelist Apr 28 11:33:27.227: INFO: namespace e2e-tests-secrets-q6lkc deletion completed in 6.090382547s • [SLOW TEST:10.304 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:33:27.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:33:33.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-9xzth" for this suite. Apr 28 11:33:39.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:33:39.599: INFO: namespace: e2e-tests-namespaces-9xzth, resource: bindings, ignored listing per whitelist Apr 28 11:33:39.660: INFO: namespace e2e-tests-namespaces-9xzth deletion completed in 6.118747834s STEP: Destroying namespace "e2e-tests-nsdeletetest-65ltx" for this suite. Apr 28 11:33:39.662: INFO: Namespace e2e-tests-nsdeletetest-65ltx was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-gm2fh" for this suite. 
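Note on the Namespaces spec above: deleting a namespace cascades to the services it contained, so a recreated namespace of the same name starts empty. A rough client-go sketch using the context-free method signatures of that era; the kubeconfig path and namespace name are assumptions, and the delete/list calls are shown back-to-back for brevity where the real test waits for removal and recreates the namespace in between.

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Delete the namespace; the namespace controller garbage-collects its services.
	if err := cs.CoreV1().Namespaces().Delete("nsdeletetest", &metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	// In the test this list runs against the recreated namespace and must be empty.
	svcs, err := cs.CoreV1().Services("nsdeletetest").List(metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("services remaining: %d\n", len(svcs.Items))
}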
Apr 28 11:33:45.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:33:45.778: INFO: namespace: e2e-tests-nsdeletetest-gm2fh, resource: bindings, ignored listing per whitelist Apr 28 11:33:45.800: INFO: namespace e2e-tests-nsdeletetest-gm2fh deletion completed in 6.13837601s • [SLOW TEST:18.573 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:33:45.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:33:45.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-mhfbw" to be "success or failure" Apr 28 11:33:45.955: INFO: Pod "downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.638989ms Apr 28 11:33:47.959: INFO: Pod "downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032557325s Apr 28 11:33:49.963: INFO: Pod "downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036301921s STEP: Saw pod success Apr 28 11:33:49.963: INFO: Pod "downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:33:49.965: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:33:49.995: INFO: Waiting for pod downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:33:50.008: INFO: Pod downwardapi-volume-1eff16dc-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:33:50.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mhfbw" for this suite. 
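Note on the memory-limit Downward API spec above: this variant projects the container's limits.memory into the volume through a resourceFieldRef rather than a fieldRef. Sketch; the volume name, file path, container name, and limit value are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					// Project the named container's limits.memory into the file.
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	// The container in the test pod carries an explicit memory limit to project.
	limit := resource.MustParse("64Mi")
	fmt.Println(vol.Name, limit.String())
}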
Apr 28 11:33:56.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:33:56.084: INFO: namespace: e2e-tests-downward-api-mhfbw, resource: bindings, ignored listing per whitelist Apr 28 11:33:56.128: INFO: namespace e2e-tests-downward-api-mhfbw deletion completed in 6.098698811s • [SLOW TEST:10.328 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:33:56.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-252bff75-8944-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:33:56.274: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-kzlnl" to be "success or failure" Apr 28 11:33:56.339: INFO: Pod "pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 65.309785ms Apr 28 11:33:58.343: INFO: Pod "pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06938494s Apr 28 11:34:00.347: INFO: Pod "pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073254915s STEP: Saw pod success Apr 28 11:34:00.347: INFO: Pod "pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:34:00.350: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Apr 28 11:34:00.382: INFO: Waiting for pod pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:34:00.472: INFO: Pod pod-projected-configmaps-252c9514-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:34:00.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kzlnl" for this suite. 
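Note on the projected configMap spec above: the projected volume maps a configMap key to a custom path and sets an explicit per-item file mode, which the test container then verifies. Sketch; the names, key, mapped path, and mode are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item file mode checked inside the container
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2", // mapped path inside the mount
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Println(*vol.VolumeSource.Projected.Sources[0].ConfigMap.Items[0].Mode)
}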
Apr 28 11:34:06.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:34:06.522: INFO: namespace: e2e-tests-projected-kzlnl, resource: bindings, ignored listing per whitelist Apr 28 11:34:06.564: INFO: namespace e2e-tests-projected-kzlnl deletion completed in 6.088440906s • [SLOW TEST:10.436 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:34:06.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0428 11:34:07.740962 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 11:34:07.741: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:34:07.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bgwx7" for this suite. 
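Note on the garbage-collector spec above: deleting a Deployment without orphaning lets the garbage collector remove the owned ReplicaSet and pods through their ownerReferences, which is why the test briefly still sees 1 rs and 2 pods before they disappear. A rough sketch of the propagation-policy choice using the context-free client-go signature of that era; the deployment name, namespace, and kubeconfig path are assumptions.

package main

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Background (non-orphaning) propagation: the Deployment goes first, then the
	// garbage collector removes the dependent ReplicaSet and its pods.
	policy := metav1.DeletePropagationBackground
	err = cs.AppsV1().Deployments("default").Delete("example-deployment",
		&metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		log.Fatal(err)
	}
}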
Apr 28 11:34:13.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:34:13.829: INFO: namespace: e2e-tests-gc-bgwx7, resource: bindings, ignored listing per whitelist Apr 28 11:34:13.832: INFO: namespace e2e-tests-gc-bgwx7 deletion completed in 6.088742976s • [SLOW TEST:7.268 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:34:13.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Apr 28 11:34:13.966: INFO: Waiting up to 5m0s for pod "downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-76sl2" to be "success or failure" Apr 28 11:34:13.979: INFO: Pod "downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.141396ms Apr 28 11:34:15.983: INFO: Pod "downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017098209s Apr 28 11:34:17.986: INFO: Pod "downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020636233s STEP: Saw pod success Apr 28 11:34:17.986: INFO: Pod "downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:34:17.988: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 11:34:18.004: INFO: Waiting for pod downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:34:18.015: INFO: Pod downward-api-2fb4ecb1-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:34:18.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-76sl2" for this suite. 
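Note on the Downward API env-var spec above: limits and requests are surfaced to the container as plain environment variables via resourceFieldRef rather than through a volume. Sketch; the variable names, image, and chosen resources are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox", // placeholder
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{
				Name: "CPU_LIMIT",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
				},
			},
			{
				Name: "MEMORY_REQUEST",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
				},
			},
		},
	}
	fmt.Println(len(container.Env), "downward API env vars")
}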
Apr 28 11:34:24.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:34:24.104: INFO: namespace: e2e-tests-downward-api-76sl2, resource: bindings, ignored listing per whitelist Apr 28 11:34:24.111: INFO: namespace e2e-tests-downward-api-76sl2 deletion completed in 6.093327733s • [SLOW TEST:10.278 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:34:24.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:34:24.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-d9xh9" to be "success or failure" Apr 28 11:34:24.213: INFO: Pod "downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141815ms Apr 28 11:34:26.217: INFO: Pod "downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008029179s Apr 28 11:34:28.221: INFO: Pod "downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012199204s STEP: Saw pod success Apr 28 11:34:28.221: INFO: Pod "downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:34:28.224: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:34:28.257: INFO: Waiting for pod downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:34:28.273: INFO: Pod downwardapi-volume-35d162a7-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:34:28.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d9xh9" for this suite. 
Apr 28 11:34:34.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:34:34.385: INFO: namespace: e2e-tests-projected-d9xh9, resource: bindings, ignored listing per whitelist Apr 28 11:34:34.416: INFO: namespace e2e-tests-projected-d9xh9 deletion completed in 6.139895771s • [SLOW TEST:10.305 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:34:34.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 28 11:34:34.532: INFO: Waiting up to 5m0s for pod "pod-3bf93e35-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-wpfql" to be "success or failure" Apr 28 11:34:34.536: INFO: Pod "pod-3bf93e35-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.769381ms Apr 28 11:34:36.540: INFO: Pod "pod-3bf93e35-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008004601s Apr 28 11:34:38.544: INFO: Pod "pod-3bf93e35-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011375607s STEP: Saw pod success Apr 28 11:34:38.544: INFO: Pod "pod-3bf93e35-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:34:38.546: INFO: Trying to get logs from node hunter-worker2 pod pod-3bf93e35-8944-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:34:38.584: INFO: Waiting for pod pod-3bf93e35-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:34:38.596: INFO: Pod pod-3bf93e35-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:34:38.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wpfql" for this suite. 
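Note on the EmptyDir (root,0777,tmpfs) spec above: the pod mounts a memory-backed emptyDir, the container creates a file with mode 0777, and the test checks ownership and permissions from the container's output. Sketch of the volume plus the check command; the names, image, and paths are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox", // placeholder
			// Create a world-writable file on the tmpfs mount and print its perms.
			Command:      []string{"sh", "-c", "touch /mnt/test && chmod 0777 /mnt/test && ls -l /mnt/test"},
			VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "scratch",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" backs the emptyDir with tmpfs.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
	}
	fmt.Println(spec.Volumes[0].Name)
}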
Apr 28 11:34:44.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:34:44.623: INFO: namespace: e2e-tests-emptydir-wpfql, resource: bindings, ignored listing per whitelist Apr 28 11:34:44.693: INFO: namespace e2e-tests-emptydir-wpfql deletion completed in 6.093673376s • [SLOW TEST:10.276 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:34:44.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-vqxr5/configmap-test-42197ea0-8944-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:34:44.843: INFO: Waiting up to 5m0s for pod "pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-vqxr5" to be "success or failure" Apr 28 11:34:44.861: INFO: Pod "pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.46196ms Apr 28 11:34:46.865: INFO: Pod "pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021284979s Apr 28 11:34:48.868: INFO: Pod "pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024952575s STEP: Saw pod success Apr 28 11:34:48.868: INFO: Pod "pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:34:48.871: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f container env-test: STEP: delete the pod Apr 28 11:34:48.952: INFO: Waiting for pod pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:34:48.962: INFO: Pod pod-configmaps-421faa49-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:34:48.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vqxr5" for this suite. 
Apr 28 11:34:54.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:34:55.003: INFO: namespace: e2e-tests-configmap-vqxr5, resource: bindings, ignored listing per whitelist Apr 28 11:34:55.064: INFO: namespace e2e-tests-configmap-vqxr5 deletion completed in 6.099017706s • [SLOW TEST:10.371 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:34:55.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:34:59.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-gjst5" for this suite. 
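Note on the Kubelet busybox spec above: the pod runs a command that writes to stdout, and the test asserts the same text comes back from the pod's logs. Sketch; the pod/container names, image, and message are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox", // placeholder
				// Whatever the command prints should appear verbatim in `kubectl logs`.
				Command: []string{"sh", "-c", "echo 'Hello from the busybox pod'"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}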
Apr 28 11:35:45.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:35:45.259: INFO: namespace: e2e-tests-kubelet-test-gjst5, resource: bindings, ignored listing per whitelist Apr 28 11:35:45.296: INFO: namespace e2e-tests-kubelet-test-gjst5 deletion completed in 46.111702252s • [SLOW TEST:50.232 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:35:45.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6635c6f9-8944-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:35:45.452: INFO: Waiting up to 5m0s for pod "pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-s8jvz" to be "success or failure" Apr 28 11:35:45.460: INFO: Pod "pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129311ms Apr 28 11:35:47.464: INFO: Pod "pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011463765s Apr 28 11:35:49.468: INFO: Pod "pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01570966s STEP: Saw pod success Apr 28 11:35:49.468: INFO: Pod "pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:35:49.471: INFO: Trying to get logs from node hunter-worker pod pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 11:35:49.503: INFO: Waiting for pod pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:35:49.519: INFO: Pod pod-secrets-66362cd3-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:35:49.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-s8jvz" for this suite. 
Apr 28 11:35:55.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:35:55.574: INFO: namespace: e2e-tests-secrets-s8jvz, resource: bindings, ignored listing per whitelist Apr 28 11:35:55.633: INFO: namespace e2e-tests-secrets-s8jvz deletion completed in 6.109937105s • [SLOW TEST:10.337 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:35:55.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-6c5d80b9-8944-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:35:55.744: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-z9pkf" to be "success or failure" Apr 28 11:35:55.748: INFO: Pod "pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178885ms Apr 28 11:35:57.751: INFO: Pod "pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007761286s Apr 28 11:35:59.756: INFO: Pod "pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012262691s STEP: Saw pod success Apr 28 11:35:59.756: INFO: Pod "pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:35:59.759: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f container configmap-volume-test: STEP: delete the pod Apr 28 11:35:59.801: INFO: Waiting for pod pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:35:59.856: INFO: Pod pod-configmaps-6c6087ed-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:35:59.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-z9pkf" for this suite. 
Apr 28 11:36:05.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:36:05.971: INFO: namespace: e2e-tests-configmap-z9pkf, resource: bindings, ignored listing per whitelist Apr 28 11:36:05.978: INFO: namespace e2e-tests-configmap-z9pkf deletion completed in 6.117008772s • [SLOW TEST:10.344 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:36:05.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4zsrr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4zsrr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.196.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.196.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.196.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.196.59_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4zsrr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4zsrr.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4zsrr.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4zsrr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.196.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.196.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.196.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.196.59_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 11:36:12.220: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.223: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.229: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.236: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.239: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.264: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.267: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.269: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.272: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.275: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.277: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.280: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.283: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:12.301: INFO: Lookups using e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4zsrr jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc] Apr 28 11:36:17.307: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.310: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.317: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.324: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.327: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.352: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.356: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.359: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.362: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.365: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the 
requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.368: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.371: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.373: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:17.387: INFO: Lookups using e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4zsrr jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc] Apr 28 11:36:22.307: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.310: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.317: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.322: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.325: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.350: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.352: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.355: INFO: Unable to read 
jessie_udp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.358: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.360: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.363: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.365: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.368: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:22.384: INFO: Lookups using e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4zsrr jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc] Apr 28 11:36:27.307: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.310: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.316: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.322: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.324: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod 
e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.348: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.350: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.354: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.357: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.360: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.362: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.365: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.368: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:27.386: INFO: Lookups using e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4zsrr jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc] Apr 28 11:36:32.307: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.310: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods 
dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.317: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.323: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.325: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.350: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.353: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.356: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.358: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.361: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.363: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.365: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.368: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:32.391: INFO: Lookups using e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4zsrr jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr 
jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc] Apr 28 11:36:37.306: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.310: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.316: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.322: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.324: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.348: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.351: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.354: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.356: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.358: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.360: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.363: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.365: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc from pod e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f: the server could not find the requested resource (get pods dns-test-729979fa-8944-11ea-80e8-0242ac11000f) Apr 28 11:36:37.380: INFO: Lookups using e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr wheezy_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4zsrr jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr jessie_udp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@dns-test-service.e2e-tests-dns-4zsrr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc] Apr 28 11:36:42.407: INFO: DNS probes using e2e-tests-dns-4zsrr/dns-test-729979fa-8944-11ea-80e8-0242ac11000f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:36:42.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-4zsrr" for this suite. Apr 28 11:36:48.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:36:48.764: INFO: namespace: e2e-tests-dns-4zsrr, resource: bindings, ignored listing per whitelist Apr 28 11:36:48.817: INFO: namespace e2e-tests-dns-4zsrr deletion completed in 6.152259061s • [SLOW TEST:42.839 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:36:48.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Apr 28 11:36:48.906: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 11:36:48.926: INFO: Waiting for terminating namespaces to be deleted... 
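The DNS probes in the block above resolve the test service by the names the cluster DNS publishes for it: dns-test-service, dns-test-service.e2e-tests-dns-4zsrr, the .svc form, and SRV records such as _http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc, over both UDP and TCP and from two different utility images ("wheezy" and "jessie"). The repeated "Unable to read ..." entries are the framework polling the probe pod for its recorded results until every lookup has succeeded, which here happens at 11:36:42.407. A minimal in-cluster sketch of the same kind of lookup follows; it is not the framework's actual code and assumes it runs inside a pod so the cluster DNS search path applies.

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// A/AAAA record for the namespaced service name the test queries.
	addrs, err := net.LookupHost("dns-test-service.e2e-tests-dns-4zsrr.svc")
	fmt.Println(addrs, err)

	// SRV record for the service's named "http" port over TCP
	// (resolves _http._tcp.dns-test-service.e2e-tests-dns-4zsrr.svc).
	_, srvs, err := net.DefaultResolver.LookupSRV(context.Background(), "http", "tcp",
		"dns-test-service.e2e-tests-dns-4zsrr.svc")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("%s:%d\n", s.Target, s.Port)
	}
}
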
Apr 28 11:36:48.929: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Apr 28 11:36:48.936: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 11:36:48.936: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 11:36:48.936: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Apr 28 11:36:48.936: INFO: Container coredns ready: true, restart count 0 Apr 28 11:36:48.936: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Apr 28 11:36:48.936: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 11:36:48.936: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Apr 28 11:36:48.941: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 11:36:48.941: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 11:36:48.941: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 11:36:48.941: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 11:36:48.941: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Apr 28 11:36:48.941: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1609f8b71d34986b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:36:49.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-llgcq" for this suite. 
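The SchedulerPredicates case above submits a pod named restricted-pod whose nodeSelector matches no node label, then waits for the FailedScheduling event ("0/3 nodes are available: 3 node(s) didn't match node selector."). A minimal sketch of such a pod spec, assuming a recent k8s.io/api; the label key/value and the pause image are illustrative, not the test's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the pod stays Pending and the
			// scheduler records a FailedScheduling event instead of an API error.
			NodeSelector: map[string]string{"example.com/no-such-label": "absent"},
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.NodeSelector)
}
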
Apr 28 11:36:56.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:36:56.037: INFO: namespace: e2e-tests-sched-pred-llgcq, resource: bindings, ignored listing per whitelist Apr 28 11:36:56.079: INFO: namespace e2e-tests-sched-pred-llgcq deletion completed in 6.09278253s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.261 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:36:56.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-90735bb9-8944-11ea-80e8-0242ac11000f STEP: Creating secret with name s-test-opt-upd-90735c3e-8944-11ea-80e8-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-90735bb9-8944-11ea-80e8-0242ac11000f STEP: Updating secret s-test-opt-upd-90735c3e-8944-11ea-80e8-0242ac11000f STEP: Creating secret with name s-test-opt-create-90735c7e-8944-11ea-80e8-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:38:24.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ncr6t" for this suite. 
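The Secrets optional-updates test above mounts secret volumes marked optional, then deletes one secret, updates another, and creates a third, waiting for the kubelet to fold each change into the mounted files; propagation rides on the kubelet's periodic sync, which is why this case runs long. A sketch of the API piece that makes this possible, the Optional flag on a secret volume source; the names are made up and this is not the test's exact spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				// The secret may not exist yet; Optional lets the pod start
				// anyway, and the files appear once the secret is created.
				SecretName: "s-test-opt-create",
				Optional:   &optional,
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
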
Apr 28 11:38:46.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:38:46.940: INFO: namespace: e2e-tests-secrets-ncr6t, resource: bindings, ignored listing per whitelist Apr 28 11:38:47.003: INFO: namespace e2e-tests-secrets-ncr6t deletion completed in 22.128215649s • [SLOW TEST:110.924 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:38:47.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 28 11:38:47.121: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642927,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 11:38:47.122: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642927,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 28 11:38:57.130: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642947,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 28 11:38:57.130: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642947,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 28 11:39:07.138: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642967,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 11:39:07.138: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642967,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 28 11:39:17.145: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642987,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 11:39:17.145: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-a,UID:d2862030-8944-11ea-99e8-0242ac110002,ResourceVersion:7642987,Generation:0,CreationTimestamp:2020-04-28 11:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 28 11:39:27.151: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-b,UID:ea64a727-8944-11ea-99e8-0242ac110002,ResourceVersion:7643007,Generation:0,CreationTimestamp:2020-04-28 11:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 11:39:27.151: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-b,UID:ea64a727-8944-11ea-99e8-0242ac110002,ResourceVersion:7643007,Generation:0,CreationTimestamp:2020-04-28 11:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 28 11:39:37.180: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-b,UID:ea64a727-8944-11ea-99e8-0242ac110002,ResourceVersion:7643027,Generation:0,CreationTimestamp:2020-04-28 11:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 11:39:37.180: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7kkjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-7kkjn/configmaps/e2e-watch-test-configmap-b,UID:ea64a727-8944-11ea-99e8-0242ac110002,ResourceVersion:7643027,Generation:0,CreationTimestamp:2020-04-28 11:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:39:47.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-7kkjn" for this suite. Apr 28 11:39:53.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:39:53.306: INFO: namespace: e2e-tests-watch-7kkjn, resource: bindings, ignored listing per whitelist Apr 28 11:39:53.354: INFO: namespace e2e-tests-watch-7kkjn deletion completed in 6.164587744s • [SLOW TEST:66.351 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:39:53.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-fa0f3979-8944-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:39:53.438: INFO: Waiting up to 5m0s for pod "pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-8cqdv" to be "success or failure" Apr 28 11:39:53.442: INFO: Pod "pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184593ms Apr 28 11:39:55.446: INFO: Pod "pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008441563s Apr 28 11:39:57.451: INFO: Pod "pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012846566s STEP: Saw pod success Apr 28 11:39:57.451: INFO: Pod "pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:39:57.454: INFO: Trying to get logs from node hunter-worker pod pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 11:39:57.473: INFO: Waiting for pod pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f to disappear Apr 28 11:39:57.478: INFO: Pod pod-secrets-fa0f9fde-8944-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:39:57.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8cqdv" for this suite. 
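The [sig-api-machinery] Watchers case earlier in this block (namespace e2e-tests-watch-7kkjn) opens separate watches for label A, label B, and A-or-B, and checks that each ADDED, MODIFIED, and DELETED notification reaches only the matching watchers. A minimal client-go sketch of one label-filtered configmap watch, assuming a recent client-go release; it is not the framework's own watch helper.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps labelled like the test's "label A" objects.
	w, err := cs.CoreV1().ConfigMaps("e2e-tests-watch-7kkjn").Watch(context.Background(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // ADDED, MODIFIED, DELETED, as in the log above
	}
}
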
Apr 28 11:40:03.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:40:03.504: INFO: namespace: e2e-tests-secrets-8cqdv, resource: bindings, ignored listing per whitelist Apr 28 11:40:03.570: INFO: namespace e2e-tests-secrets-8cqdv deletion completed in 6.08951299s • [SLOW TEST:10.216 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:40:03.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 28 11:40:03.689: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 28 11:40:08.700: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:40:09.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-b8tg2" for this suite. 
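The ReplicationController case above changes the selector label on one of the controller's pods; the RC then releases that pod (it stops counting it and drops its controller reference) and creates a replacement to restore the replica count. A hedged sketch of the relabel step with client-go; the namespace, pod name, and label key here are guesses, not values taken from this run.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // the e2e run used a generated namespace instead

	// The pod was created by the RC, so it carries the selector label.
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), "pod-release-xxxxx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Change the label the RC selects on; the controller releases this pod
	// and spins up a new one to keep the replica count.
	pod.Labels["name"] = "pod-release-released"
	if _, err := cs.CoreV1().Pods(ns).Update(context.Background(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
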
Apr 28 11:40:15.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:40:15.812: INFO: namespace: e2e-tests-replication-controller-b8tg2, resource: bindings, ignored listing per whitelist Apr 28 11:40:15.866: INFO: namespace e2e-tests-replication-controller-b8tg2 deletion completed in 6.151730244s • [SLOW TEST:12.296 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:40:15.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Apr 28 11:40:16.119: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:40:16.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7p52v" for this suite. 
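The Kubectl proxy case above starts the proxy with -p 0 and --disable-filter, so it binds an ephemeral local port, then curls /api/ through it. A rough equivalent from Go using os/exec, assuming kubectl is on PATH and that the proxy announces its bound address on the first stdout line (normally "Starting to serve on 127.0.0.1:<port>"); this is a sketch, not the framework's runner.

package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// First line is expected to look like: "Starting to serve on 127.0.0.1:NNNNN"
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	addr := strings.TrimSpace(line[strings.LastIndex(line, " ")+1:])

	resp, err := http.Get("http://" + addr + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // the API version list served through the proxy
}
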
Apr 28 11:40:22.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:40:22.262: INFO: namespace: e2e-tests-kubectl-7p52v, resource: bindings, ignored listing per whitelist Apr 28 11:40:22.331: INFO: namespace e2e-tests-kubectl-7p52v deletion completed in 6.122951396s • [SLOW TEST:6.465 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:40:22.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:40:22.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pbvfh" for this suite. 
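The Pods Set QOS Class case above submits a pod and verifies that status.qosClass is populated from the pod's resource requests and limits: Guaranteed when every container's requests equal its limits for cpu and memory, Burstable when some requests or limits are set but that condition is not met, BestEffort when none are set. A sketch of a Guaranteed pod spec, assuming a recent k8s.io/api; the names and image are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-qos"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/pause:3.1",
				// Requests equal limits for every resource, so the pod gets
				// status.qosClass: Guaranteed.
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	fmt.Println(pod.Name)
}
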
Apr 28 11:40:44.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:40:44.590: INFO: namespace: e2e-tests-pods-pbvfh, resource: bindings, ignored listing per whitelist Apr 28 11:40:44.645: INFO: namespace e2e-tests-pods-pbvfh deletion completed in 22.159757011s • [SLOW TEST:22.314 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:40:44.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Apr 28 11:40:48.826: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:41:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-qwv5t" for this suite. Apr 28 11:41:18.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:41:19.001: INFO: namespace: e2e-tests-namespaces-qwv5t, resource: bindings, ignored listing per whitelist Apr 28 11:41:19.031: INFO: namespace e2e-tests-namespaces-qwv5t deletion completed in 6.102323374s STEP: Destroying namespace "e2e-tests-nsdeletetest-kpzsg" for this suite. Apr 28 11:41:19.033: INFO: Namespace e2e-tests-nsdeletetest-kpzsg was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-p9c8p" for this suite. 
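The Namespaces case above creates a namespace with a running pod, deletes the namespace, waits for it to disappear, recreates it, and verifies no pods survived. Namespace deletion is asynchronous: the namespace controller first removes the contents, then the namespace object itself, which is why the test polls. A sketch of the delete-and-wait step with client-go; the namespace name is hypothetical and this is not the framework's helper.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "nsdeletetest" // hypothetical; the run used generated names

	if err := cs.CoreV1().Namespaces().Delete(context.Background(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Keep polling until a NotFound error reports the namespace gone; the
	// controller deletes the pods inside it before the object disappears.
	for {
		_, err := cs.CoreV1().Namespaces().Get(context.Background(), ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("namespace gone")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
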
Apr 28 11:41:25.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:41:25.103: INFO: namespace: e2e-tests-nsdeletetest-p9c8p, resource: bindings, ignored listing per whitelist Apr 28 11:41:25.131: INFO: namespace e2e-tests-nsdeletetest-p9c8p deletion completed in 6.09737875s • [SLOW TEST:40.485 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:41:25.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 28 11:41:25.247: INFO: Waiting up to 5m0s for pod "pod-30c38fae-8945-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-bz2th" to be "success or failure" Apr 28 11:41:25.276: INFO: Pod "pod-30c38fae-8945-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.218306ms Apr 28 11:41:27.279: INFO: Pod "pod-30c38fae-8945-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032771545s Apr 28 11:41:29.284: INFO: Pod "pod-30c38fae-8945-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036950136s STEP: Saw pod success Apr 28 11:41:29.284: INFO: Pod "pod-30c38fae-8945-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:41:29.286: INFO: Trying to get logs from node hunter-worker pod pod-30c38fae-8945-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:41:29.317: INFO: Waiting for pod pod-30c38fae-8945-11ea-80e8-0242ac11000f to disappear Apr 28 11:41:29.327: INFO: Pod pod-30c38fae-8945-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:41:29.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bz2th" for this suite. 
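The EmptyDir case above mounts a memory-backed (tmpfs) emptyDir and has a non-root container create and check a file with 0777 permissions; the permission check is performed by the test image itself rather than through an API field. The API-visible pieces are the Memory medium and the non-root securityContext, sketched below with made-up names and an ordinary busybox image instead of the test image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := int64(1001)
	spec := corev1.PodSpec{
		// Run the whole pod as a non-root UID, as in the (non-root,...) variants.
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" backs the volume with tmpfs instead of node disk.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
	fmt.Println(len(spec.Volumes))
}
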
Apr 28 11:41:35.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:41:35.393: INFO: namespace: e2e-tests-emptydir-bz2th, resource: bindings, ignored listing per whitelist Apr 28 11:41:35.412: INFO: namespace e2e-tests-emptydir-bz2th deletion completed in 6.081890051s • [SLOW TEST:10.281 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:41:35.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:41:35.488: INFO: Creating ReplicaSet my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f Apr 28 11:41:35.538: INFO: Pod name my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f: Found 0 pods out of 1 Apr 28 11:41:40.541: INFO: Pod name my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f: Found 1 pods out of 1 Apr 28 11:41:40.541: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f" is running Apr 28 11:41:40.544: INFO: Pod "my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f-ss25p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 11:41:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 11:41:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 11:41:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 11:41:35 +0000 UTC Reason: Message:}]) Apr 28 11:41:40.544: INFO: Trying to dial the pod Apr 28 11:41:45.557: INFO: Controller my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f-ss25p]: "my-hostname-basic-36e3f98a-8945-11ea-80e8-0242ac11000f-ss25p", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:41:45.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-mvtfm" for this suite. 
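The ReplicaSet case above creates a single-replica ReplicaSet whose pod serves its own hostname over HTTP, waits for the pod to run, then dials it and expects the pod name back. A sketch of such a ReplicaSet object, assuming apps/v1; the image reference and port are placeholders, not the exact test image.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "my-hostname-basic",
						// Placeholder: any image that answers HTTP with its own hostname.
						Image: "example.com/serve-hostname:latest",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	fmt.Println(rs.Name, *rs.Spec.Replicas)
}
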
Apr 28 11:41:51.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:41:51.643: INFO: namespace: e2e-tests-replicaset-mvtfm, resource: bindings, ignored listing per whitelist Apr 28 11:41:51.667: INFO: namespace e2e-tests-replicaset-mvtfm deletion completed in 6.107010414s • [SLOW TEST:16.255 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:41:51.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-pmhm STEP: Creating a pod to test atomic-volume-subpath Apr 28 11:41:51.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pmhm" in namespace "e2e-tests-subpath-sr5sq" to be "success or failure" Apr 28 11:41:51.822: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Pending", Reason="", readiness=false. Elapsed: 25.138282ms Apr 28 11:41:53.837: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040305587s Apr 28 11:41:55.841: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044087883s Apr 28 11:41:57.888: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=true. Elapsed: 6.090663259s Apr 28 11:41:59.892: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 8.094599254s Apr 28 11:42:01.909: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 10.112368814s Apr 28 11:42:03.915: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 12.118262442s Apr 28 11:42:05.921: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 14.124212325s Apr 28 11:42:07.945: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 16.148216356s Apr 28 11:42:09.957: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 18.160221291s Apr 28 11:42:11.970: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 20.172728314s Apr 28 11:42:13.973: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.176239697s Apr 28 11:42:15.978: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Running", Reason="", readiness=false. Elapsed: 24.180542359s Apr 28 11:42:17.982: INFO: Pod "pod-subpath-test-projected-pmhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.185087953s STEP: Saw pod success Apr 28 11:42:17.982: INFO: Pod "pod-subpath-test-projected-pmhm" satisfied condition "success or failure" Apr 28 11:42:17.985: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-pmhm container test-container-subpath-projected-pmhm: STEP: delete the pod Apr 28 11:42:18.004: INFO: Waiting for pod pod-subpath-test-projected-pmhm to disappear Apr 28 11:42:18.020: INFO: Pod pod-subpath-test-projected-pmhm no longer exists STEP: Deleting pod pod-subpath-test-projected-pmhm Apr 28 11:42:18.020: INFO: Deleting pod "pod-subpath-test-projected-pmhm" in namespace "e2e-tests-subpath-sr5sq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:42:18.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-sr5sq" for this suite. Apr 28 11:42:24.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:42:24.276: INFO: namespace: e2e-tests-subpath-sr5sq, resource: bindings, ignored listing per whitelist Apr 28 11:42:24.290: INFO: namespace e2e-tests-subpath-sr5sq deletion completed in 6.264177766s • [SLOW TEST:32.622 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:42:24.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-540ab438-8945-11ea-80e8-0242ac11000f STEP: Creating secret with name s-test-opt-upd-540ab4ba-8945-11ea-80e8-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-540ab438-8945-11ea-80e8-0242ac11000f STEP: Updating secret s-test-opt-upd-540ab4ba-8945-11ea-80e8-0242ac11000f STEP: Creating secret with name s-test-opt-create-540ab4ef-8945-11ea-80e8-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:42:34.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-btjxb" for this suite. 
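
A hedged sketch of the volume shape that optional-updates scenario depends on: a projected volume whose secret sources are marked optional, so the pod keeps running when one secret is deleted and the kubelet refreshes the mounted files when another is updated or created. The pod name, secret names, keys, and mount path below are placeholders, not the objects created in this run.

# illustrative pod with optional projected secret sources (placeholder names and keys)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secrets-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do ls -R /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del       # deleting this secret must not kill the pod
          optional: true
          items:
          - key: data-1              # placeholder key
            path: del/data-1
      - secret:
          name: s-test-opt-upd       # updating this secret should show up in the mounted files
          optional: true
          items:
          - key: data-1
            path: upd/data-1
EOF
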
Apr 28 11:42:56.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:42:56.671: INFO: namespace: e2e-tests-projected-btjxb, resource: bindings, ignored listing per whitelist Apr 28 11:42:56.682: INFO: namespace e2e-tests-projected-btjxb deletion completed in 22.092647684s • [SLOW TEST:32.392 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:42:56.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-675d7735-8945-11ea-80e8-0242ac11000f STEP: Creating configMap with name cm-test-opt-upd-675d77a1-8945-11ea-80e8-0242ac11000f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-675d7735-8945-11ea-80e8-0242ac11000f STEP: Updating configmap cm-test-opt-upd-675d77a1-8945-11ea-80e8-0242ac11000f STEP: Creating configMap with name cm-test-opt-create-675d78dd-8945-11ea-80e8-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:43:04.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r976d" for this suite. 
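
The configMap variant follows the same pattern. A sketch of reproducing the update-propagation check by hand, assuming a pod (here called projected-configmaps-demo, a hypothetical name) that mounts all three configMaps through a projected volume with optional: true at /etc/projected:

# mirror the STEP sequence above by hand (pod and configMap names are illustrative)
kubectl create configmap cm-test-opt-del --from-literal=data-1=value-1
kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1
kubectl delete configmap cm-test-opt-del                                        # optional source: the pod keeps running
kubectl patch configmap cm-test-opt-upd -p '{"data":{"data-1":"value-2"}}'      # update propagates into the mounted volume
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1       # new optional source appears once synced
kubectl exec projected-configmaps-demo -- ls -R /etc/projected                  # watch the mounted files catch up
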
Apr 28 11:43:22.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:43:23.015: INFO: namespace: e2e-tests-projected-r976d, resource: bindings, ignored listing per whitelist Apr 28 11:43:23.038: INFO: namespace e2e-tests-projected-r976d deletion completed in 18.089508942s • [SLOW TEST:26.355 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:43:23.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Apr 28 11:43:23.170: INFO: Waiting up to 5m0s for pod "client-containers-771090ac-8945-11ea-80e8-0242ac11000f" in namespace "e2e-tests-containers-xpnqx" to be "success or failure" Apr 28 11:43:23.174: INFO: Pod "client-containers-771090ac-8945-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191093ms Apr 28 11:43:25.178: INFO: Pod "client-containers-771090ac-8945-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008014349s Apr 28 11:43:27.182: INFO: Pod "client-containers-771090ac-8945-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01164549s STEP: Saw pod success Apr 28 11:43:27.182: INFO: Pod "client-containers-771090ac-8945-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:43:27.184: INFO: Trying to get logs from node hunter-worker2 pod client-containers-771090ac-8945-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:43:27.199: INFO: Waiting for pod client-containers-771090ac-8945-11ea-80e8-0242ac11000f to disappear Apr 28 11:43:27.212: INFO: Pod client-containers-771090ac-8945-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:43:27.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-xpnqx" for this suite. 
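
What that pod is checking: spec.containers[].args replaces the image's default CMD (the "docker cmd") while leaving the image's ENTRYPOINT alone, so the container runs the overridden arguments and exits. A minimal sketch with an illustrative image, not the exact one the suite uses:

# illustrative args override: busybox has no ENTRYPOINT and CMD ["sh"], so args become the command line
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "args override the image's default CMD"]
EOF
kubectl logs client-containers-demo    # prints the overridden arguments once the pod has succeeded
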
Apr 28 11:43:33.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:43:33.315: INFO: namespace: e2e-tests-containers-xpnqx, resource: bindings, ignored listing per whitelist Apr 28 11:43:33.357: INFO: namespace e2e-tests-containers-xpnqx deletion completed in 6.087092808s • [SLOW TEST:10.320 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:43:33.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pn759 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-pn759 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-pn759 Apr 28 11:43:33.479: INFO: Found 0 stateful pods, waiting for 1 Apr 28 11:43:43.484: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 28 11:43:43.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 11:43:43.790: INFO: stderr: "I0428 11:43:43.628210 1847 log.go:172] (0xc0006ea2c0) (0xc0006834a0) Create stream\nI0428 11:43:43.628272 1847 log.go:172] (0xc0006ea2c0) (0xc0006834a0) Stream added, broadcasting: 1\nI0428 11:43:43.630848 1847 log.go:172] (0xc0006ea2c0) Reply frame received for 1\nI0428 11:43:43.630906 1847 log.go:172] (0xc0006ea2c0) (0xc000606000) Create stream\nI0428 11:43:43.630973 1847 log.go:172] (0xc0006ea2c0) (0xc000606000) Stream added, broadcasting: 3\nI0428 11:43:43.631950 1847 log.go:172] (0xc0006ea2c0) Reply frame received for 3\nI0428 11:43:43.632001 1847 log.go:172] (0xc0006ea2c0) (0xc000312000) Create stream\nI0428 11:43:43.632017 1847 log.go:172] (0xc0006ea2c0) (0xc000312000) Stream added, broadcasting: 5\nI0428 11:43:43.633252 1847 log.go:172] (0xc0006ea2c0) Reply frame received for 5\nI0428 11:43:43.782314 1847 
log.go:172] (0xc0006ea2c0) Data frame received for 3\nI0428 11:43:43.782346 1847 log.go:172] (0xc000606000) (3) Data frame handling\nI0428 11:43:43.782366 1847 log.go:172] (0xc000606000) (3) Data frame sent\nI0428 11:43:43.782377 1847 log.go:172] (0xc0006ea2c0) Data frame received for 3\nI0428 11:43:43.782388 1847 log.go:172] (0xc000606000) (3) Data frame handling\nI0428 11:43:43.782675 1847 log.go:172] (0xc0006ea2c0) Data frame received for 5\nI0428 11:43:43.782692 1847 log.go:172] (0xc000312000) (5) Data frame handling\nI0428 11:43:43.784612 1847 log.go:172] (0xc0006ea2c0) Data frame received for 1\nI0428 11:43:43.784639 1847 log.go:172] (0xc0006834a0) (1) Data frame handling\nI0428 11:43:43.784649 1847 log.go:172] (0xc0006834a0) (1) Data frame sent\nI0428 11:43:43.784728 1847 log.go:172] (0xc0006ea2c0) (0xc0006834a0) Stream removed, broadcasting: 1\nI0428 11:43:43.784849 1847 log.go:172] (0xc0006ea2c0) (0xc0006834a0) Stream removed, broadcasting: 1\nI0428 11:43:43.784859 1847 log.go:172] (0xc0006ea2c0) (0xc000606000) Stream removed, broadcasting: 3\nI0428 11:43:43.785068 1847 log.go:172] (0xc0006ea2c0) Go away received\nI0428 11:43:43.785428 1847 log.go:172] (0xc0006ea2c0) (0xc000312000) Stream removed, broadcasting: 5\n" Apr 28 11:43:43.790: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 11:43:43.790: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 11:43:43.794: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 28 11:43:53.799: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 11:43:53.799: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 11:43:53.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999939s Apr 28 11:43:54.819: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994779786s Apr 28 11:43:55.824: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990474503s Apr 28 11:43:56.827: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986233092s Apr 28 11:43:57.831: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982496195s Apr 28 11:43:58.836: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978686857s Apr 28 11:43:59.840: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974047477s Apr 28 11:44:00.844: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.97006005s Apr 28 11:44:01.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96540694s Apr 28 11:44:02.853: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.464479ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-pn759 Apr 28 11:44:03.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:44:04.060: INFO: stderr: "I0428 11:44:03.982913 1869 log.go:172] (0xc00014c840) (0xc000689360) Create stream\nI0428 11:44:03.982964 1869 log.go:172] (0xc00014c840) (0xc000689360) Stream added, broadcasting: 1\nI0428 11:44:03.984652 1869 log.go:172] (0xc00014c840) Reply frame received for 1\nI0428 11:44:03.984686 1869 log.go:172] (0xc00014c840) (0xc00075a000) Create stream\nI0428 11:44:03.984700 1869 log.go:172] 
(0xc00014c840) (0xc00075a000) Stream added, broadcasting: 3\nI0428 11:44:03.985573 1869 log.go:172] (0xc00014c840) Reply frame received for 3\nI0428 11:44:03.985614 1869 log.go:172] (0xc00014c840) (0xc0006f6000) Create stream\nI0428 11:44:03.985645 1869 log.go:172] (0xc00014c840) (0xc0006f6000) Stream added, broadcasting: 5\nI0428 11:44:03.986257 1869 log.go:172] (0xc00014c840) Reply frame received for 5\nI0428 11:44:04.052747 1869 log.go:172] (0xc00014c840) Data frame received for 3\nI0428 11:44:04.052794 1869 log.go:172] (0xc00075a000) (3) Data frame handling\nI0428 11:44:04.052817 1869 log.go:172] (0xc00075a000) (3) Data frame sent\nI0428 11:44:04.052839 1869 log.go:172] (0xc00014c840) Data frame received for 3\nI0428 11:44:04.052853 1869 log.go:172] (0xc00075a000) (3) Data frame handling\nI0428 11:44:04.052907 1869 log.go:172] (0xc00014c840) Data frame received for 5\nI0428 11:44:04.053041 1869 log.go:172] (0xc0006f6000) (5) Data frame handling\nI0428 11:44:04.054812 1869 log.go:172] (0xc00014c840) Data frame received for 1\nI0428 11:44:04.054834 1869 log.go:172] (0xc000689360) (1) Data frame handling\nI0428 11:44:04.054861 1869 log.go:172] (0xc000689360) (1) Data frame sent\nI0428 11:44:04.054879 1869 log.go:172] (0xc00014c840) (0xc000689360) Stream removed, broadcasting: 1\nI0428 11:44:04.054892 1869 log.go:172] (0xc00014c840) Go away received\nI0428 11:44:04.055187 1869 log.go:172] (0xc00014c840) (0xc000689360) Stream removed, broadcasting: 1\nI0428 11:44:04.055218 1869 log.go:172] (0xc00014c840) (0xc00075a000) Stream removed, broadcasting: 3\nI0428 11:44:04.055243 1869 log.go:172] (0xc00014c840) (0xc0006f6000) Stream removed, broadcasting: 5\n" Apr 28 11:44:04.060: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 11:44:04.060: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 11:44:04.063: INFO: Found 1 stateful pods, waiting for 3 Apr 28 11:44:14.069: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 11:44:14.069: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 11:44:14.069: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 28 11:44:14.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 11:44:14.304: INFO: stderr: "I0428 11:44:14.203830 1892 log.go:172] (0xc000714370) (0xc00073a640) Create stream\nI0428 11:44:14.203891 1892 log.go:172] (0xc000714370) (0xc00073a640) Stream added, broadcasting: 1\nI0428 11:44:14.206393 1892 log.go:172] (0xc000714370) Reply frame received for 1\nI0428 11:44:14.206443 1892 log.go:172] (0xc000714370) (0xc00073a6e0) Create stream\nI0428 11:44:14.206457 1892 log.go:172] (0xc000714370) (0xc00073a6e0) Stream added, broadcasting: 3\nI0428 11:44:14.207519 1892 log.go:172] (0xc000714370) Reply frame received for 3\nI0428 11:44:14.207569 1892 log.go:172] (0xc000714370) (0xc00073a780) Create stream\nI0428 11:44:14.207591 1892 log.go:172] (0xc000714370) (0xc00073a780) Stream added, broadcasting: 5\nI0428 11:44:14.208367 1892 log.go:172] (0xc000714370) Reply frame received for 5\nI0428 11:44:14.298397 1892 log.go:172] (0xc000714370) Data frame 
received for 3\nI0428 11:44:14.298449 1892 log.go:172] (0xc00073a6e0) (3) Data frame handling\nI0428 11:44:14.298468 1892 log.go:172] (0xc00073a6e0) (3) Data frame sent\nI0428 11:44:14.298483 1892 log.go:172] (0xc000714370) Data frame received for 3\nI0428 11:44:14.298498 1892 log.go:172] (0xc00073a6e0) (3) Data frame handling\nI0428 11:44:14.298534 1892 log.go:172] (0xc000714370) Data frame received for 5\nI0428 11:44:14.298556 1892 log.go:172] (0xc00073a780) (5) Data frame handling\nI0428 11:44:14.300414 1892 log.go:172] (0xc000714370) Data frame received for 1\nI0428 11:44:14.300450 1892 log.go:172] (0xc00073a640) (1) Data frame handling\nI0428 11:44:14.300468 1892 log.go:172] (0xc00073a640) (1) Data frame sent\nI0428 11:44:14.300493 1892 log.go:172] (0xc000714370) (0xc00073a640) Stream removed, broadcasting: 1\nI0428 11:44:14.300604 1892 log.go:172] (0xc000714370) Go away received\nI0428 11:44:14.300677 1892 log.go:172] (0xc000714370) (0xc00073a640) Stream removed, broadcasting: 1\nI0428 11:44:14.300710 1892 log.go:172] (0xc000714370) (0xc00073a6e0) Stream removed, broadcasting: 3\nI0428 11:44:14.300731 1892 log.go:172] (0xc000714370) (0xc00073a780) Stream removed, broadcasting: 5\n" Apr 28 11:44:14.305: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 11:44:14.305: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 11:44:14.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 11:44:14.568: INFO: stderr: "I0428 11:44:14.468090 1915 log.go:172] (0xc0003f8370) (0xc000706640) Create stream\nI0428 11:44:14.468144 1915 log.go:172] (0xc0003f8370) (0xc000706640) Stream added, broadcasting: 1\nI0428 11:44:14.470539 1915 log.go:172] (0xc0003f8370) Reply frame received for 1\nI0428 11:44:14.470582 1915 log.go:172] (0xc0003f8370) (0xc000588c80) Create stream\nI0428 11:44:14.470594 1915 log.go:172] (0xc0003f8370) (0xc000588c80) Stream added, broadcasting: 3\nI0428 11:44:14.471443 1915 log.go:172] (0xc0003f8370) Reply frame received for 3\nI0428 11:44:14.471470 1915 log.go:172] (0xc0003f8370) (0xc0006a2000) Create stream\nI0428 11:44:14.471479 1915 log.go:172] (0xc0003f8370) (0xc0006a2000) Stream added, broadcasting: 5\nI0428 11:44:14.472332 1915 log.go:172] (0xc0003f8370) Reply frame received for 5\nI0428 11:44:14.564311 1915 log.go:172] (0xc0003f8370) Data frame received for 5\nI0428 11:44:14.564354 1915 log.go:172] (0xc0006a2000) (5) Data frame handling\nI0428 11:44:14.564379 1915 log.go:172] (0xc0003f8370) Data frame received for 3\nI0428 11:44:14.564393 1915 log.go:172] (0xc000588c80) (3) Data frame handling\nI0428 11:44:14.564406 1915 log.go:172] (0xc000588c80) (3) Data frame sent\nI0428 11:44:14.564417 1915 log.go:172] (0xc0003f8370) Data frame received for 3\nI0428 11:44:14.564429 1915 log.go:172] (0xc000588c80) (3) Data frame handling\nI0428 11:44:14.565831 1915 log.go:172] (0xc0003f8370) Data frame received for 1\nI0428 11:44:14.565865 1915 log.go:172] (0xc000706640) (1) Data frame handling\nI0428 11:44:14.565875 1915 log.go:172] (0xc000706640) (1) Data frame sent\nI0428 11:44:14.565896 1915 log.go:172] (0xc0003f8370) (0xc000706640) Stream removed, broadcasting: 1\nI0428 11:44:14.565915 1915 log.go:172] (0xc0003f8370) Go away received\nI0428 11:44:14.566046 1915 log.go:172] (0xc0003f8370) (0xc000706640) Stream removed, 
broadcasting: 1\nI0428 11:44:14.566063 1915 log.go:172] (0xc0003f8370) (0xc000588c80) Stream removed, broadcasting: 3\nI0428 11:44:14.566073 1915 log.go:172] (0xc0003f8370) (0xc0006a2000) Stream removed, broadcasting: 5\n" Apr 28 11:44:14.568: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 11:44:14.568: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 11:44:14.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 11:44:14.802: INFO: stderr: "I0428 11:44:14.680584 1937 log.go:172] (0xc000686210) (0xc0008be500) Create stream\nI0428 11:44:14.680642 1937 log.go:172] (0xc000686210) (0xc0008be500) Stream added, broadcasting: 1\nI0428 11:44:14.682986 1937 log.go:172] (0xc000686210) Reply frame received for 1\nI0428 11:44:14.683034 1937 log.go:172] (0xc000686210) (0xc00054cb40) Create stream\nI0428 11:44:14.683052 1937 log.go:172] (0xc000686210) (0xc00054cb40) Stream added, broadcasting: 3\nI0428 11:44:14.683856 1937 log.go:172] (0xc000686210) Reply frame received for 3\nI0428 11:44:14.683898 1937 log.go:172] (0xc000686210) (0xc0008be5a0) Create stream\nI0428 11:44:14.683917 1937 log.go:172] (0xc000686210) (0xc0008be5a0) Stream added, broadcasting: 5\nI0428 11:44:14.684743 1937 log.go:172] (0xc000686210) Reply frame received for 5\nI0428 11:44:14.794596 1937 log.go:172] (0xc000686210) Data frame received for 3\nI0428 11:44:14.794627 1937 log.go:172] (0xc00054cb40) (3) Data frame handling\nI0428 11:44:14.794649 1937 log.go:172] (0xc00054cb40) (3) Data frame sent\nI0428 11:44:14.794658 1937 log.go:172] (0xc000686210) Data frame received for 3\nI0428 11:44:14.794664 1937 log.go:172] (0xc00054cb40) (3) Data frame handling\nI0428 11:44:14.794924 1937 log.go:172] (0xc000686210) Data frame received for 5\nI0428 11:44:14.794951 1937 log.go:172] (0xc0008be5a0) (5) Data frame handling\nI0428 11:44:14.796920 1937 log.go:172] (0xc000686210) Data frame received for 1\nI0428 11:44:14.796943 1937 log.go:172] (0xc0008be500) (1) Data frame handling\nI0428 11:44:14.796951 1937 log.go:172] (0xc0008be500) (1) Data frame sent\nI0428 11:44:14.796958 1937 log.go:172] (0xc000686210) (0xc0008be500) Stream removed, broadcasting: 1\nI0428 11:44:14.796970 1937 log.go:172] (0xc000686210) Go away received\nI0428 11:44:14.797224 1937 log.go:172] (0xc000686210) (0xc0008be500) Stream removed, broadcasting: 1\nI0428 11:44:14.797266 1937 log.go:172] (0xc000686210) (0xc00054cb40) Stream removed, broadcasting: 3\nI0428 11:44:14.797276 1937 log.go:172] (0xc000686210) (0xc0008be5a0) Stream removed, broadcasting: 5\n" Apr 28 11:44:14.802: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 11:44:14.802: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 11:44:14.802: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 11:44:14.821: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 28 11:44:24.828: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 11:44:24.828: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 28 11:44:24.828: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false 
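
What those repeated kubectl execs are doing: the ss pods serve /usr/share/nginx/html/index.html and their readiness probe depends on that file, so moving it aside marks the pod unready; with the default OrderedReady pod management the StatefulSet controller then halts further scale-up or scale-down until readiness is restored. The same trick by hand, reusing the namespace and pod name from this run (the --kubeconfig flag is dropped):

# break readiness on ss-0: the readiness probe stops passing once index.html is gone
kubectl --namespace=e2e-tests-statefulset-pn759 exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# scaling is held while any pod is unready; watch the replica counts stay put
kubectl --namespace=e2e-tests-statefulset-pn759 get statefulset ss -w
# restore readiness so the controller can continue scaling in order
kubectl --namespace=e2e-tests-statefulset-pn759 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
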
Apr 28 11:44:24.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999502s Apr 28 11:44:25.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994638575s Apr 28 11:44:26.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988991485s Apr 28 11:44:27.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984487532s Apr 28 11:44:28.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978974065s Apr 28 11:44:29.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974384561s Apr 28 11:44:30.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970037044s Apr 28 11:44:31.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964561068s Apr 28 11:44:32.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959779993s Apr 28 11:44:33.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.891106ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-pn759 Apr 28 11:44:34.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:44:35.103: INFO: stderr: "I0428 11:44:35.011275 1958 log.go:172] (0xc000138630) (0xc0001a5360) Create stream\nI0428 11:44:35.011323 1958 log.go:172] (0xc000138630) (0xc0001a5360) Stream added, broadcasting: 1\nI0428 11:44:35.013375 1958 log.go:172] (0xc000138630) Reply frame received for 1\nI0428 11:44:35.013437 1958 log.go:172] (0xc000138630) (0xc0003e0000) Create stream\nI0428 11:44:35.013457 1958 log.go:172] (0xc000138630) (0xc0003e0000) Stream added, broadcasting: 3\nI0428 11:44:35.014275 1958 log.go:172] (0xc000138630) Reply frame received for 3\nI0428 11:44:35.014295 1958 log.go:172] (0xc000138630) (0xc0001a5400) Create stream\nI0428 11:44:35.014307 1958 log.go:172] (0xc000138630) (0xc0001a5400) Stream added, broadcasting: 5\nI0428 11:44:35.015081 1958 log.go:172] (0xc000138630) Reply frame received for 5\nI0428 11:44:35.097417 1958 log.go:172] (0xc000138630) Data frame received for 3\nI0428 11:44:35.097453 1958 log.go:172] (0xc0003e0000) (3) Data frame handling\nI0428 11:44:35.097461 1958 log.go:172] (0xc0003e0000) (3) Data frame sent\nI0428 11:44:35.097466 1958 log.go:172] (0xc000138630) Data frame received for 3\nI0428 11:44:35.097470 1958 log.go:172] (0xc0003e0000) (3) Data frame handling\nI0428 11:44:35.097494 1958 log.go:172] (0xc000138630) Data frame received for 5\nI0428 11:44:35.097499 1958 log.go:172] (0xc0001a5400) (5) Data frame handling\nI0428 11:44:35.098910 1958 log.go:172] (0xc000138630) Data frame received for 1\nI0428 11:44:35.098922 1958 log.go:172] (0xc0001a5360) (1) Data frame handling\nI0428 11:44:35.098929 1958 log.go:172] (0xc0001a5360) (1) Data frame sent\nI0428 11:44:35.098936 1958 log.go:172] (0xc000138630) (0xc0001a5360) Stream removed, broadcasting: 1\nI0428 11:44:35.099086 1958 log.go:172] (0xc000138630) (0xc0001a5360) Stream removed, broadcasting: 1\nI0428 11:44:35.099103 1958 log.go:172] (0xc000138630) (0xc0003e0000) Stream removed, broadcasting: 3\nI0428 11:44:35.099285 1958 log.go:172] (0xc000138630) Go away received\nI0428 11:44:35.099364 1958 log.go:172] (0xc000138630) (0xc0001a5400) Stream removed, broadcasting: 5\n" Apr 28 11:44:35.103: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 11:44:35.103: INFO: stdout 
of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 11:44:35.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:44:35.313: INFO: stderr: "I0428 11:44:35.236939 1979 log.go:172] (0xc000138840) (0xc000764640) Create stream\nI0428 11:44:35.236990 1979 log.go:172] (0xc000138840) (0xc000764640) Stream added, broadcasting: 1\nI0428 11:44:35.239084 1979 log.go:172] (0xc000138840) Reply frame received for 1\nI0428 11:44:35.239111 1979 log.go:172] (0xc000138840) (0xc000684c80) Create stream\nI0428 11:44:35.239120 1979 log.go:172] (0xc000138840) (0xc000684c80) Stream added, broadcasting: 3\nI0428 11:44:35.239753 1979 log.go:172] (0xc000138840) Reply frame received for 3\nI0428 11:44:35.239777 1979 log.go:172] (0xc000138840) (0xc000684dc0) Create stream\nI0428 11:44:35.239790 1979 log.go:172] (0xc000138840) (0xc000684dc0) Stream added, broadcasting: 5\nI0428 11:44:35.240558 1979 log.go:172] (0xc000138840) Reply frame received for 5\nI0428 11:44:35.306488 1979 log.go:172] (0xc000138840) Data frame received for 5\nI0428 11:44:35.306530 1979 log.go:172] (0xc000138840) Data frame received for 3\nI0428 11:44:35.306559 1979 log.go:172] (0xc000684c80) (3) Data frame handling\nI0428 11:44:35.306570 1979 log.go:172] (0xc000684c80) (3) Data frame sent\nI0428 11:44:35.306579 1979 log.go:172] (0xc000138840) Data frame received for 3\nI0428 11:44:35.306586 1979 log.go:172] (0xc000684c80) (3) Data frame handling\nI0428 11:44:35.306628 1979 log.go:172] (0xc000684dc0) (5) Data frame handling\nI0428 11:44:35.308180 1979 log.go:172] (0xc000138840) Data frame received for 1\nI0428 11:44:35.308194 1979 log.go:172] (0xc000764640) (1) Data frame handling\nI0428 11:44:35.308207 1979 log.go:172] (0xc000764640) (1) Data frame sent\nI0428 11:44:35.308391 1979 log.go:172] (0xc000138840) (0xc000764640) Stream removed, broadcasting: 1\nI0428 11:44:35.308424 1979 log.go:172] (0xc000138840) Go away received\nI0428 11:44:35.308724 1979 log.go:172] (0xc000138840) (0xc000764640) Stream removed, broadcasting: 1\nI0428 11:44:35.308749 1979 log.go:172] (0xc000138840) (0xc000684c80) Stream removed, broadcasting: 3\nI0428 11:44:35.308767 1979 log.go:172] (0xc000138840) (0xc000684dc0) Stream removed, broadcasting: 5\n" Apr 28 11:44:35.313: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 11:44:35.313: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 11:44:35.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:44:35.863: INFO: rc: 1 Apr 28 11:44:35.863: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0428 11:44:35.469330 2002 log.go:172] (0xc000138840) (0xc0005cb400) Create stream I0428 11:44:35.469400 2002 log.go:172] (0xc000138840) (0xc0005cb400) Stream added, broadcasting: 1 I0428 11:44:35.471220 2002 log.go:172] (0xc000138840) Reply frame received for 1 I0428 11:44:35.471261 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Create stream I0428 
11:44:35.471291 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Stream added, broadcasting: 3 I0428 11:44:35.472192 2002 log.go:172] (0xc000138840) Reply frame received for 3 I0428 11:44:35.472234 2002 log.go:172] (0xc000138840) (0xc000742000) Create stream I0428 11:44:35.472267 2002 log.go:172] (0xc000138840) (0xc000742000) Stream added, broadcasting: 5 I0428 11:44:35.473333 2002 log.go:172] (0xc000138840) Reply frame received for 5 I0428 11:44:35.857977 2002 log.go:172] (0xc000138840) Data frame received for 1 I0428 11:44:35.858025 2002 log.go:172] (0xc0005cb400) (1) Data frame handling I0428 11:44:35.858046 2002 log.go:172] (0xc0005cb400) (1) Data frame sent I0428 11:44:35.858086 2002 log.go:172] (0xc000138840) (0xc000742000) Stream removed, broadcasting: 5 I0428 11:44:35.858135 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Stream removed, broadcasting: 3 I0428 11:44:35.858175 2002 log.go:172] (0xc000138840) (0xc0005cb400) Stream removed, broadcasting: 1 I0428 11:44:35.858211 2002 log.go:172] (0xc000138840) Go away received I0428 11:44:35.858515 2002 log.go:172] (0xc000138840) (0xc0005cb400) Stream removed, broadcasting: 1 I0428 11:44:35.858541 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Stream removed, broadcasting: 3 I0428 11:44:35.858550 2002 log.go:172] (0xc000138840) (0xc000742000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "f56d320e8e5a6717116fa205288fe56aa6e3e13b2c8c821c23407bb41ab2cce6": task 1380059d9982742f147ff9eaa00e9c91e9cffe2a31676fe4d83fab06ee841ba6 not found: not found [] 0xc00161b5f0 exit status 1 true [0xc000ca2810 0xc000ca2850 0xc000ca2898] [0xc000ca2810 0xc000ca2850 0xc000ca2898] [0xc000ca2828 0xc000ca2878] [0x935700 0x935700] 0xc002562a80 }: Command stdout: stderr: I0428 11:44:35.469330 2002 log.go:172] (0xc000138840) (0xc0005cb400) Create stream I0428 11:44:35.469400 2002 log.go:172] (0xc000138840) (0xc0005cb400) Stream added, broadcasting: 1 I0428 11:44:35.471220 2002 log.go:172] (0xc000138840) Reply frame received for 1 I0428 11:44:35.471261 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Create stream I0428 11:44:35.471291 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Stream added, broadcasting: 3 I0428 11:44:35.472192 2002 log.go:172] (0xc000138840) Reply frame received for 3 I0428 11:44:35.472234 2002 log.go:172] (0xc000138840) (0xc000742000) Create stream I0428 11:44:35.472267 2002 log.go:172] (0xc000138840) (0xc000742000) Stream added, broadcasting: 5 I0428 11:44:35.473333 2002 log.go:172] (0xc000138840) Reply frame received for 5 I0428 11:44:35.857977 2002 log.go:172] (0xc000138840) Data frame received for 1 I0428 11:44:35.858025 2002 log.go:172] (0xc0005cb400) (1) Data frame handling I0428 11:44:35.858046 2002 log.go:172] (0xc0005cb400) (1) Data frame sent I0428 11:44:35.858086 2002 log.go:172] (0xc000138840) (0xc000742000) Stream removed, broadcasting: 5 I0428 11:44:35.858135 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Stream removed, broadcasting: 3 I0428 11:44:35.858175 2002 log.go:172] (0xc000138840) (0xc0005cb400) Stream removed, broadcasting: 1 I0428 11:44:35.858211 2002 log.go:172] (0xc000138840) Go away received I0428 11:44:35.858515 2002 log.go:172] (0xc000138840) (0xc0005cb400) Stream removed, broadcasting: 1 I0428 11:44:35.858541 2002 log.go:172] (0xc000138840) (0xc0005cb4a0) Stream removed, broadcasting: 3 I0428 11:44:35.858550 2002 log.go:172] (0xc000138840) (0xc000742000) Stream removed, broadcasting: 5 error: Internal error 
occurred: error executing command in container: failed to exec in container: failed to create exec "f56d320e8e5a6717116fa205288fe56aa6e3e13b2c8c821c23407bb41ab2cce6": task 1380059d9982742f147ff9eaa00e9c91e9cffe2a31676fe4d83fab06ee841ba6 not found: not found error: exit status 1 Apr 28 11:44:45.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:44:45.959: INFO: rc: 1 Apr 28 11:44:45.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00161b740 exit status 1 true [0xc000ca28d0 0xc000ca2970 0xc000ca2ad0] [0xc000ca28d0 0xc000ca2970 0xc000ca2ad0] [0xc000ca2950 0xc000ca2a98] [0x935700 0x935700] 0xc002562d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:44:55.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:44:56.061: INFO: rc: 1 Apr 28 11:44:56.061: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001deb470 exit status 1 true [0xc000716758 0xc0007167a8 0xc0007167e8] [0xc000716758 0xc0007167a8 0xc0007167e8] [0xc000716778 0xc0007167e0] [0x935700 0x935700] 0xc00160f0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:45:06.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:45:06.156: INFO: rc: 1 Apr 28 11:45:06.156: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001db6d50 exit status 1 true [0xc0003533e0 0xc000353428 0xc000353458] [0xc0003533e0 0xc000353428 0xc000353458] [0xc000353408 0xc000353448] [0x935700 0x935700] 0xc0022222a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:45:16.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:45:16.262: INFO: rc: 1 Apr 28 11:45:16.262: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001db6e70 exit status 1 true [0xc000353468 0xc0003534f8 0xc000353540] [0xc000353468 0xc0003534f8 0xc000353540] [0xc0003534d0 0xc000353528] [0x935700 0x935700] 0xc002222a20 }: Command 
stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:45:26.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:45:26.354: INFO: rc: 1 Apr 28 11:45:26.354: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001db6fc0 exit status 1 true [0xc000353560 0xc0003535a8 0xc000353618] [0xc000353560 0xc0003535a8 0xc000353618] [0xc000353598 0xc0003535f0] [0x935700 0x935700] 0xc002223620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:45:36.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:45:36.442: INFO: rc: 1 Apr 28 11:45:36.442: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00161b890 exit status 1 true [0xc000ca2ad8 0xc000ca2b80 0xc000ca2c98] [0xc000ca2ad8 0xc000ca2b80 0xc000ca2c98] [0xc000ca2b78 0xc000ca2c08] [0x935700 0x935700] 0xc002562fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:45:46.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:45:46.532: INFO: rc: 1 Apr 28 11:45:46.532: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001deb5c0 exit status 1 true [0xc0007167f0 0xc000716858 0xc000716898] [0xc0007167f0 0xc000716858 0xc000716898] [0xc000716808 0xc000716880] [0x935700 0x935700] 0xc00160fc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:45:56.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:45:56.616: INFO: rc: 1 Apr 28 11:45:56.616: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0010ffb90 exit status 1 true [0xc0015de008 0xc0015de020 0xc0015de038] [0xc0015de008 0xc0015de020 0xc0015de038] [0xc0015de018 0xc0015de030] [0x935700 0x935700] 0xc001be41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:46:06.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:46:06.707: INFO: rc: 1 Apr 28 11:46:06.707: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001508120 exit status 1 true [0xc000551ed0 0xc00000e1c8 0xc000352e38] [0xc000551ed0 0xc00000e1c8 0xc000352e38] [0xc00000e188 0xc000352db8] [0x935700 0x935700] 0xc0026f41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:46:16.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:46:16.803: INFO: rc: 1 Apr 28 11:46:16.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00166a0f0 exit status 1 true [0xc0015de040 0xc0015de058 0xc0015de070] [0xc0015de040 0xc0015de058 0xc0015de070] [0xc0015de050 0xc0015de068] [0x935700 0x935700] 0xc002a9c720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:46:26.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:46:26.894: INFO: rc: 1 Apr 28 11:46:26.894: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000dd8150 exit status 1 true [0xc000ca2048 0xc000ca2130 0xc000ca2248] [0xc000ca2048 0xc000ca2130 0xc000ca2248] [0xc000ca20e0 0xc000ca2200] [0x935700 0x935700] 0xc0017a4540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:46:36.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:46:36.980: INFO: rc: 1 Apr 28 11:46:36.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015082d0 exit status 1 true [0xc000352e58 0xc000352ee8 0xc000352fb8] [0xc000352e58 0xc000352ee8 0xc000352fb8] [0xc000352ed8 0xc000352f90] [0x935700 0x935700] 0xc0026f4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:46:46.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:46:47.079: INFO: rc: 1 Apr 28 11:46:47.079: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001a40180 exit status 1 true [0xc000716050 0xc000716098 0xc0007160d8] [0xc000716050 0xc000716098 0xc0007160d8] [0xc000716090 0xc0007160c0] [0x935700 0x935700] 0xc0022222a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:46:57.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:46:57.181: INFO: rc: 1 Apr 28 11:46:57.182: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001508450 exit status 1 true [0xc000352fc0 0xc000352ff8 0xc000353040] [0xc000352fc0 0xc000352ff8 0xc000353040] [0xc000352ff0 0xc000353028] [0x935700 0x935700] 0xc0026f4ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:47:07.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:47:07.282: INFO: rc: 1 Apr 28 11:47:07.282: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001a402a0 exit status 1 true [0xc0007160e0 0xc000716170 0xc000716300] [0xc0007160e0 0xc000716170 0xc000716300] [0xc000716108 0xc0007161d0] [0x935700 0x935700] 0xc002222a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:47:17.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:47:17.374: INFO: rc: 1 Apr 28 11:47:17.374: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001a403c0 exit status 1 true [0xc000716318 0xc000716378 0xc0007163f8] [0xc000716318 0xc000716378 0xc0007163f8] [0xc000716358 0xc0007163b0] [0x935700 0x935700] 0xc002223620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:47:27.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:47:27.450: INFO: rc: 1 Apr 28 11:47:27.450: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00166a240 exit status 1 true [0xc0015de078 0xc0015de090 0xc0015de0a8] [0xc0015de078 0xc0015de090 0xc0015de0a8] [0xc0015de088 0xc0015de0a0] [0x935700 0x935700] 0xc002a9c9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:47:37.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:47:37.531: INFO: rc: 1 Apr 28 11:47:37.531: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000dd8300 exit status 1 true [0xc000ca2258 0xc000ca22c0 0xc000ca23c0] [0xc000ca2258 0xc000ca22c0 0xc000ca23c0] [0xc000ca22a0 0xc000ca2328] [0x935700 0x935700] 0xc0017a4b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:47:47.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:47:47.619: INFO: rc: 1 Apr 28 11:47:47.619: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015085d0 exit status 1 true [0xc000353048 0xc0003530c0 0xc000353128] [0xc000353048 0xc0003530c0 0xc000353128] [0xc000353090 0xc000353100] [0x935700 0x935700] 0xc0026f5800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:47:57.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:47:57.711: INFO: rc: 1 Apr 28 11:47:57.712: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00166a150 exit status 1 true [0xc00000e1c8 0xc0015de000 0xc0015de018] [0xc00000e1c8 0xc0015de000 0xc0015de018] [0xc000551ff8 0xc0015de010] [0x935700 0x935700] 0xc002a9c720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:48:07.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:48:07.799: INFO: rc: 1 Apr 28 11:48:07.799: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001a40120 exit status 1 true [0xc000716050 0xc000716098 0xc0007160d8] [0xc000716050 0xc000716098 
0xc0007160d8] [0xc000716090 0xc0007160c0] [0x935700 0x935700] 0xc0022222a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:48:17.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:48:17.890: INFO: rc: 1 Apr 28 11:48:17.890: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000dd8120 exit status 1 true [0xc000352d50 0xc000352e58 0xc000352ee8] [0xc000352d50 0xc000352e58 0xc000352ee8] [0xc000352e38 0xc000352ed8] [0x935700 0x935700] 0xc0026f41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:48:27.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:48:27.986: INFO: rc: 1 Apr 28 11:48:27.986: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00166a2d0 exit status 1 true [0xc0015de020 0xc0015de038 0xc0015de050] [0xc0015de020 0xc0015de038 0xc0015de050] [0xc0015de030 0xc0015de048] [0x935700 0x935700] 0xc002a9c9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:48:37.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:48:38.091: INFO: rc: 1 Apr 28 11:48:38.091: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00166a540 exit status 1 true [0xc0015de058 0xc0015de070 0xc0015de088] [0xc0015de058 0xc0015de070 0xc0015de088] [0xc0015de068 0xc0015de080] [0x935700 0x935700] 0xc002a9ccc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:48:48.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:48:48.173: INFO: rc: 1 Apr 28 11:48:48.173: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001a40270 exit status 1 true [0xc0007160e0 0xc000716170 0xc000716300] [0xc0007160e0 0xc000716170 0xc000716300] [0xc000716108 0xc0007161d0] [0x935700 0x935700] 0xc002222a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 
11:48:58.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:48:58.267: INFO: rc: 1 Apr 28 11:48:58.267: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00166a6f0 exit status 1 true [0xc0015de090 0xc0015de0a8 0xc0015de0c0] [0xc0015de090 0xc0015de0a8 0xc0015de0c0] [0xc0015de0a0 0xc0015de0b8] [0x935700 0x935700] 0xc002a9cf60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:49:08.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:49:08.355: INFO: rc: 1 Apr 28 11:49:08.355: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015081e0 exit status 1 true [0xc000ca2048 0xc000ca2130 0xc000ca2248] [0xc000ca2048 0xc000ca2130 0xc000ca2248] [0xc000ca20e0 0xc000ca2200] [0x935700 0x935700] 0xc0017a4540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:49:18.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:49:18.445: INFO: rc: 1 Apr 28 11:49:18.445: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001a403f0 exit status 1 true [0xc000716318 0xc000716378 0xc0007163f8] [0xc000716318 0xc000716378 0xc0007163f8] [0xc000716358 0xc0007163b0] [0x935700 0x935700] 0xc002223620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:49:28.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 11:49:28.535: INFO: rc: 1 Apr 28 11:49:28.535: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001a40540 exit status 1 true [0xc000716450 0xc000716490 0xc0007164a8] [0xc000716450 0xc000716490 0xc0007164a8] [0xc000716488 0xc0007164a0] [0x935700 0x935700] 0xc0022238c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 11:49:38.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn759 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ 
|| true' Apr 28 11:49:38.633: INFO: rc: 1 Apr 28 11:49:38.633: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Apr 28 11:49:38.633: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Apr 28 11:49:38.643: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pn759 Apr 28 11:49:38.646: INFO: Scaling statefulset ss to 0 Apr 28 11:49:38.655: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 11:49:38.657: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:49:38.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pn759" for this suite. Apr 28 11:49:44.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:49:44.729: INFO: namespace: e2e-tests-statefulset-pn759, resource: bindings, ignored listing per whitelist Apr 28 11:49:44.771: INFO: namespace e2e-tests-statefulset-pn759 deletion completed in 6.101326565s • [SLOW TEST:371.413 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:49:44.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-5a985d60-8946-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:49:44.904: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-wj5t7" to be "success or failure" Apr 28 11:49:44.911: INFO: Pod "pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052846ms Apr 28 11:49:46.929: INFO: Pod "pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024635425s Apr 28 11:49:48.933: INFO: Pod "pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027976507s STEP: Saw pod success Apr 28 11:49:48.933: INFO: Pod "pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:49:48.935: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Apr 28 11:49:48.972: INFO: Waiting for pod pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f to disappear Apr 28 11:49:48.976: INFO: Pod pod-projected-secrets-5a98da17-8946-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:49:48.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wj5t7" for this suite. Apr 28 11:49:55.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:49:55.109: INFO: namespace: e2e-tests-projected-wj5t7, resource: bindings, ignored listing per whitelist Apr 28 11:49:55.115: INFO: namespace e2e-tests-projected-wj5t7 deletion completed in 6.135965797s • [SLOW TEST:10.343 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:49:55.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-2l5f8 Apr 28 11:49:59.269: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-2l5f8 STEP: checking the pod's current state and verifying that restartCount is present Apr 28 11:49:59.273: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:53:59.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2l5f8" for this suite. 
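The liveness check above reduces to watching restartCount on the liveness-exec pod over the probe window. A rough manual equivalent with kubectl, using the namespace and pod name from this run (the probe stanza in the comment is an illustrative sketch, not the test's exact manifest):

    # Probe shape this test exercises (sketch): the container keeps /tmp/health
    # present, so an exec probe running "cat /tmp/health" never fails.
    #   livenessProbe:
    #     exec:
    #       command: ["cat", "/tmp/health"]
    #     initialDelaySeconds: 5
    #     periodSeconds: 5

    # Confirm the container was never restarted while the probe ran:
    kubectl --kubeconfig=/root/.kube/config -n e2e-tests-container-probe-2l5f8 \
      get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'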
Apr 28 11:54:05.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:54:06.002: INFO: namespace: e2e-tests-container-probe-2l5f8, resource: bindings, ignored listing per whitelist Apr 28 11:54:06.057: INFO: namespace e2e-tests-container-probe-2l5f8 deletion completed in 6.09490621s • [SLOW TEST:250.942 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:54:06.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 28 11:54:06.179: INFO: Waiting up to 5m0s for pod "pod-f650b9a3-8946-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-vwzpf" to be "success or failure" Apr 28 11:54:06.184: INFO: Pod "pod-f650b9a3-8946-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119139ms Apr 28 11:54:08.188: INFO: Pod "pod-f650b9a3-8946-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008432863s Apr 28 11:54:10.192: INFO: Pod "pod-f650b9a3-8946-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012137193s STEP: Saw pod success Apr 28 11:54:10.192: INFO: Pod "pod-f650b9a3-8946-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:54:10.213: INFO: Trying to get logs from node hunter-worker2 pod pod-f650b9a3-8946-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 11:54:10.233: INFO: Waiting for pod pod-f650b9a3-8946-11ea-80e8-0242ac11000f to disappear Apr 28 11:54:10.237: INFO: Pod pod-f650b9a3-8946-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:54:10.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vwzpf" for this suite. 
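The emptyDir case above creates a throwaway pod, checks the mount, and expects success. A pod of roughly that shape can be reproduced by hand; the name, image, and command below are illustrative assumptions, while the default-medium emptyDir volume and the 0777 expectation follow the test title:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0777-demo            # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox                     # assumed image; the e2e test uses its own test image
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                       # default medium; the test expects 0777 on the mount
    EOF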
Apr 28 11:54:16.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:54:16.298: INFO: namespace: e2e-tests-emptydir-vwzpf, resource: bindings, ignored listing per whitelist Apr 28 11:54:16.329: INFO: namespace e2e-tests-emptydir-vwzpf deletion completed in 6.088603987s • [SLOW TEST:10.272 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:54:16.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 11:54:16.448: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 28 11:54:16.455: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 28 11:54:21.460: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 28 11:54:21.460: INFO: Creating deployment "test-rolling-update-deployment" Apr 28 11:54:21.463: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 28 11:54:21.527: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 28 11:54:23.661: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 28 11:54:23.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723671661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723671661, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723671661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723671661, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 11:54:25.669: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Apr 28 11:54:25.679: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-7xb2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7xb2k/deployments/test-rolling-update-deployment,UID:ff723e7d-8946-11ea-99e8-0242ac110002,ResourceVersion:7645414,Generation:1,CreationTimestamp:2020-04-28 11:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-28 11:54:21 +0000 UTC 2020-04-28 11:54:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-28 11:54:25 +0000 UTC 2020-04-28 11:54:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 28 11:54:25.682: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-7xb2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7xb2k/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ff7d1fee-8946-11ea-99e8-0242ac110002,ResourceVersion:7645404,Generation:1,CreationTimestamp:2020-04-28 11:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ff723e7d-8946-11ea-99e8-0242ac110002 0xc001b61c77 0xc001b61c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 28 11:54:25.682: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 28 11:54:25.682: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-7xb2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7xb2k/replicasets/test-rolling-update-controller,UID:fc75764b-8946-11ea-99e8-0242ac110002,ResourceVersion:7645413,Generation:2,CreationTimestamp:2020-04-28 11:54:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ff723e7d-8946-11ea-99e8-0242ac110002 0xc001b61b87 0xc001b61b88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 11:54:25.685: INFO: Pod "test-rolling-update-deployment-75db98fb4c-l7s2h" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-l7s2h,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-7xb2k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7xb2k/pods/test-rolling-update-deployment-75db98fb4c-l7s2h,UID:ff7e4ba8-8946-11ea-99e8-0242ac110002,ResourceVersion:7645403,Generation:0,CreationTimestamp:2020-04-28 11:54:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ff7d1fee-8946-11ea-99e8-0242ac110002 0xc001d1cb37 0xc001d1cb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xlnzq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xlnzq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-xlnzq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d1cc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d1cc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:54:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:54:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:54:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 11:54:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.203,StartTime:2020-04-28 11:54:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-28 11:54:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b8314ebbda6f99957af9b5ab0c97a9227b3f2725a44f7293303cd60ccab5c46a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:54:25.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7xb2k" for this suite. 
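The object dumps above boil down to three resources at the end of the test: the Deployment test-rolling-update-deployment, its new ReplicaSet test-rolling-update-deployment-75db98fb4c at 1/1, and the adopted old ReplicaSet test-rolling-update-controller scaled to 0. A hedged manual equivalent of those final checks, using the namespace and labels from this run:

    NS=e2e-tests-deployment-7xb2k
    kubectl -n "$NS" rollout status deployment/test-rolling-update-deployment
    kubectl -n "$NS" get rs -l name=sample-pod \
      -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas
    # Expected shape: test-rolling-update-deployment-75db98fb4c at 1/1,
    # test-rolling-update-controller scaled down to 0.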
Apr 28 11:54:31.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:54:31.777: INFO: namespace: e2e-tests-deployment-7xb2k, resource: bindings, ignored listing per whitelist Apr 28 11:54:31.831: INFO: namespace e2e-tests-deployment-7xb2k deletion completed in 6.143352308s • [SLOW TEST:15.501 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:54:31.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-05ae6f08-8947-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:54:31.935: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-vwq2m" to be "success or failure" Apr 28 11:54:31.952: INFO: Pod "pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.026544ms Apr 28 11:54:33.956: INFO: Pod "pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021460728s Apr 28 11:54:35.961: INFO: Pod "pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025941631s STEP: Saw pod success Apr 28 11:54:35.961: INFO: Pod "pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:54:35.964: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Apr 28 11:54:35.995: INFO: Waiting for pod pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:54:36.021: INFO: Pod pod-projected-configmaps-05af2de7-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:54:36.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vwq2m" for this suite. 
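The "with mappings" variant consumes a ConfigMap through a projected volume that remaps a key onto a different path inside the mount. A minimal sketch with hypothetical names, key, path, and image (only the projected/items shape follows the test):

    kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox                               # assumed image
        command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-cm-demo
              items:
              - key: data-1
                path: path/to/data-2                 # the mapping: key data-1 surfaces at this path
    EOF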
Apr 28 11:54:42.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:54:42.125: INFO: namespace: e2e-tests-projected-vwq2m, resource: bindings, ignored listing per whitelist Apr 28 11:54:42.166: INFO: namespace e2e-tests-projected-vwq2m deletion completed in 6.141869804s • [SLOW TEST:10.335 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:54:42.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Apr 28 11:54:42.761: INFO: Waiting up to 5m0s for pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr" in namespace "e2e-tests-svcaccounts-rqlf9" to be "success or failure" Apr 28 11:54:42.768: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202063ms Apr 28 11:54:44.771: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009279804s Apr 28 11:54:46.824: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062921802s Apr 28 11:54:48.829: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06758871s STEP: Saw pod success Apr 28 11:54:48.829: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr" satisfied condition "success or failure" Apr 28 11:54:48.832: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr container token-test: STEP: delete the pod Apr 28 11:54:48.866: INFO: Waiting for pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr to disappear Apr 28 11:54:48.879: INFO: Pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kp8wr no longer exists STEP: Creating a pod to test consume service account root CA Apr 28 11:54:48.883: INFO: Waiting up to 5m0s for pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz" in namespace "e2e-tests-svcaccounts-rqlf9" to be "success or failure" Apr 28 11:54:48.886: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz": Phase="Pending", Reason="", readiness=false. Elapsed: 3.165136ms Apr 28 11:54:50.898: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014419575s Apr 28 11:54:52.902: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018515715s Apr 28 11:54:54.905: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022302168s STEP: Saw pod success Apr 28 11:54:54.906: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz" satisfied condition "success or failure" Apr 28 11:54:54.908: INFO: Trying to get logs from node hunter-worker pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz container root-ca-test: STEP: delete the pod Apr 28 11:54:54.969: INFO: Waiting for pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz to disappear Apr 28 11:54:54.975: INFO: Pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-kwtbz no longer exists STEP: Creating a pod to test consume service account namespace Apr 28 11:54:54.978: INFO: Waiting up to 5m0s for pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c" in namespace "e2e-tests-svcaccounts-rqlf9" to be "success or failure" Apr 28 11:54:54.981: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.024728ms Apr 28 11:54:56.987: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009687047s Apr 28 11:54:58.991: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013208044s Apr 28 11:55:00.995: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016994498s STEP: Saw pod success Apr 28 11:55:00.995: INFO: Pod "pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c" satisfied condition "success or failure" Apr 28 11:55:00.997: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c container namespace-test: STEP: delete the pod Apr 28 11:55:01.089: INFO: Waiting for pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c to disappear Apr 28 11:55:01.122: INFO: Pod pod-service-account-0c23721f-8947-11ea-80e8-0242ac11000f-wh92c no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:55:01.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-rqlf9" for this suite. 
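Each of the three pods above reads one file from the automatically mounted service-account volume (token, root CA, namespace). The same check can be run by hand against any pod whose service account token is auto-mounted; the pod name is a placeholder, the paths are the standard mount locations:

    POD=<some-running-pod>   # placeholder
    kubectl exec "$POD" -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
    kubectl exec "$POD" -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubectl exec "$POD" -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace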
Apr 28 11:55:07.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:55:07.232: INFO: namespace: e2e-tests-svcaccounts-rqlf9, resource: bindings, ignored listing per whitelist Apr 28 11:55:07.250: INFO: namespace e2e-tests-svcaccounts-rqlf9 deletion completed in 6.124489469s • [SLOW TEST:25.083 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:55:07.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Apr 28 11:55:07.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 28 11:55:07.536: INFO: stderr: "" Apr 28 11:55:07.536: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:55:07.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xkvgq" for this suite. 
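The api-versions case is a plain string match over the command's stdout; the same assertion from a shell, reusing the kubeconfig from this run:

    kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1 \
      && echo "v1 is in the available API versions"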
Apr 28 11:55:13.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:55:13.591: INFO: namespace: e2e-tests-kubectl-xkvgq, resource: bindings, ignored listing per whitelist Apr 28 11:55:13.650: INFO: namespace e2e-tests-kubectl-xkvgq deletion completed in 6.109150618s • [SLOW TEST:6.399 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:55:13.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Apr 28 11:55:18.324: INFO: Successfully updated pod "labelsupdate1e9c5b81-8947-11ea-80e8-0242ac11000f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:55:20.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7zvpz" for this suite. 
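The labels-update case depends on the kubelet re-rendering a projected downwardAPI file after the pod's labels change. A hedged sketch of the volume plus the manual relabel-and-reread check; the mount path and label key are assumptions, the pod name and namespace come from the log above:

    # Volume shape (sketch):
    #   volumes:
    #   - name: podinfo
    #     projected:
    #       sources:
    #       - downwardAPI:
    #           items:
    #           - path: "labels"
    #             fieldRef:
    #               fieldPath: metadata.labels

    NS=e2e-tests-projected-7zvpz
    kubectl -n "$NS" label pod labelsupdate1e9c5b81-8947-11ea-80e8-0242ac11000f updated=true --overwrite
    kubectl -n "$NS" exec labelsupdate1e9c5b81-8947-11ea-80e8-0242ac11000f -- cat /etc/podinfo/labels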
Apr 28 11:55:42.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:55:42.390: INFO: namespace: e2e-tests-projected-7zvpz, resource: bindings, ignored listing per whitelist Apr 28 11:55:42.459: INFO: namespace e2e-tests-projected-7zvpz deletion completed in 22.094340986s • [SLOW TEST:28.810 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:55:42.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:55:42.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-xl7wt" to be "success or failure" Apr 28 11:55:42.590: INFO: Pod "downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.878837ms Apr 28 11:55:44.615: INFO: Pod "downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045925125s Apr 28 11:55:46.620: INFO: Pod "downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050305351s STEP: Saw pod success Apr 28 11:55:46.620: INFO: Pod "downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:55:46.623: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:55:46.638: INFO: Waiting for pod downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:55:46.643: INFO: Pod downwardapi-volume-2fc59c3a-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:55:46.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xl7wt" for this suite. 
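The cpu-limit case surfaces the container's limits.cpu through a downwardAPI volume item with a resourceFieldRef. A minimal sketch; the pod name, image, mount path, and limit value are illustrative, and the resourceFieldRef stanza is the part the test exercises:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-limit-demo     # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox                      # assumed image
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 500m                       # illustrative limit
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
    EOF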
Apr 28 11:55:52.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:55:52.773: INFO: namespace: e2e-tests-downward-api-xl7wt, resource: bindings, ignored listing per whitelist Apr 28 11:55:52.797: INFO: namespace e2e-tests-downward-api-xl7wt deletion completed in 6.151429307s • [SLOW TEST:10.338 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:55:52.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-35f29b8d-8947-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 11:55:52.912: INFO: Waiting up to 5m0s for pod "pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-4fxrn" to be "success or failure" Apr 28 11:55:52.927: INFO: Pod "pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.601173ms Apr 28 11:55:54.931: INFO: Pod "pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019543549s Apr 28 11:55:56.935: INFO: Pod "pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023466237s STEP: Saw pod success Apr 28 11:55:56.935: INFO: Pod "pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:55:56.938: INFO: Trying to get logs from node hunter-worker pod pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f container secret-env-test: STEP: delete the pod Apr 28 11:55:56.971: INFO: Waiting for pod pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:55:57.004: INFO: Pod pod-secrets-35f354c4-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:55:57.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4fxrn" for this suite. 
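The env-vars variant maps a Secret key to an environment variable with secretKeyRef instead of mounting a volume. A minimal sketch with hypothetical secret, key, and variable names:

    kubectl create secret generic env-secret-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: busybox            # assumed image
        command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: env-secret-demo
              key: data-1
    EOF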
Apr 28 11:56:03.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:56:03.063: INFO: namespace: e2e-tests-secrets-4fxrn, resource: bindings, ignored listing per whitelist Apr 28 11:56:03.107: INFO: namespace e2e-tests-secrets-4fxrn deletion completed in 6.098937549s • [SLOW TEST:10.309 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:56:03.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 11:56:03.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-j826f" to be "success or failure" Apr 28 11:56:03.259: INFO: Pod "downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.784738ms Apr 28 11:56:05.263: INFO: Pod "downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027548681s Apr 28 11:56:07.267: INFO: Pod "downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031592804s STEP: Saw pod success Apr 28 11:56:07.267: INFO: Pod "downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:56:07.270: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 11:56:07.307: INFO: Waiting for pod downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:56:07.376: INFO: Pod downwardapi-volume-3c1ad48b-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:56:07.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j826f" for this suite. 
Apr 28 11:56:13.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:56:13.412: INFO: namespace: e2e-tests-projected-j826f, resource: bindings, ignored listing per whitelist Apr 28 11:56:13.464: INFO: namespace e2e-tests-projected-j826f deletion completed in 6.08476651s • [SLOW TEST:10.357 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:56:13.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-424190d7-8947-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:56:13.592: INFO: Waiting up to 5m0s for pod "pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-7knl9" to be "success or failure" Apr 28 11:56:13.602: INFO: Pod "pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.876775ms Apr 28 11:56:15.606: INFO: Pod "pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014005417s Apr 28 11:56:17.610: INFO: Pod "pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018282248s STEP: Saw pod success Apr 28 11:56:17.610: INFO: Pod "pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:56:17.614: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f container configmap-volume-test: STEP: delete the pod Apr 28 11:56:17.631: INFO: Waiting for pod pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:56:17.635: INFO: Pod pod-configmaps-42471020-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:56:17.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7knl9" for this suite. 
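"Mappings and Item mode set" differs from a plain ConfigMap volume only in the items stanza: the key is remapped to another path and given a per-item file mode. A sketch of that stanza plus the corresponding permission check; the mount path, key, remapped path, and mode are illustrative, while the ConfigMap name is the one created above:

    #   volumes:
    #   - name: configmap-volume
    #     configMap:
    #       name: configmap-test-volume-map-424190d7-8947-11ea-80e8-0242ac11000f
    #       items:
    #       - key: data-1            # illustrative key
    #         path: path/to/data-2   # remapped path inside the mount
    #         mode: 0400             # per-item file mode ("Item mode set")

    kubectl exec <pod> -- ls -l /etc/configmap-volume/path/to/data-2   # expect -r--------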
Apr 28 11:56:23.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:56:23.723: INFO: namespace: e2e-tests-configmap-7knl9, resource: bindings, ignored listing per whitelist Apr 28 11:56:23.730: INFO: namespace e2e-tests-configmap-7knl9 deletion completed in 6.091583732s • [SLOW TEST:10.266 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:56:23.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-485e1fbc-8947-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:56:23.823: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-4rgjd" to be "success or failure" Apr 28 11:56:23.827: INFO: Pod "pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.628942ms Apr 28 11:56:25.831: INFO: Pod "pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008080509s Apr 28 11:56:27.836: INFO: Pod "pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012334746s STEP: Saw pod success Apr 28 11:56:27.836: INFO: Pod "pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:56:27.839: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Apr 28 11:56:27.873: INFO: Waiting for pod pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:56:27.876: INFO: Pod pod-projected-configmaps-485ff65f-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:56:27.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4rgjd" for this suite. 
Apr 28 11:56:33.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:56:33.992: INFO: namespace: e2e-tests-projected-4rgjd, resource: bindings, ignored listing per whitelist Apr 28 11:56:34.006: INFO: namespace e2e-tests-projected-4rgjd deletion completed in 6.126226607s • [SLOW TEST:10.276 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:56:34.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 28 11:56:42.205: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 11:56:42.208: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 11:56:44.208: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 11:56:44.212: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 11:56:46.208: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 11:56:46.212: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 11:56:48.208: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 11:56:48.212: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 11:56:50.208: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 11:56:50.212: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 11:56:52.208: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 11:56:52.212: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:56:52.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ngh4q" for this suite. 
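The preStop hook exercised above only fires when the pod is deleted, which is why the test deletes pod-with-prestop-http-hook and then polls until it disappears. A minimal way to inspect the relevant API fields and trigger the same path by hand (the namespace placeholder is an assumption):
  $ kubectl explain pod.spec.containers.lifecycle.preStop.httpGet
  $ kubectl delete pod pod-with-prestop-http-hook -n <ns>
  $ kubectl get pod pod-with-prestop-http-hook -n <ns> --watch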
Apr 28 11:57:14.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:57:14.298: INFO: namespace: e2e-tests-container-lifecycle-hook-ngh4q, resource: bindings, ignored listing per whitelist Apr 28 11:57:14.345: INFO: namespace e2e-tests-container-lifecycle-hook-ngh4q deletion completed in 22.099967259s • [SLOW TEST:40.339 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:57:14.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Apr 28 11:57:14.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7mk5j' Apr 28 11:57:16.982: INFO: stderr: "" Apr 28 11:57:16.982: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Apr 28 11:57:17.986: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:57:17.986: INFO: Found 0 / 1 Apr 28 11:57:18.986: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:57:18.986: INFO: Found 0 / 1 Apr 28 11:57:19.987: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:57:19.987: INFO: Found 0 / 1 Apr 28 11:57:20.986: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:57:20.986: INFO: Found 1 / 1 Apr 28 11:57:20.986: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 28 11:57:20.990: INFO: Selector matched 1 pods for map[app:redis] Apr 28 11:57:20.990: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Apr 28 11:57:20.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j' Apr 28 11:57:21.108: INFO: stderr: "" Apr 28 11:57:21.109: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Apr 11:57:19.777 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Apr 11:57:19.777 # Server started, Redis version 3.2.12\n1:M 28 Apr 11:57:19.778 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Apr 11:57:19.778 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Apr 28 11:57:21.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --tail=1' Apr 28 11:57:21.227: INFO: stderr: "" Apr 28 11:57:21.227: INFO: stdout: "1:M 28 Apr 11:57:19.778 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Apr 28 11:57:21.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --limit-bytes=1' Apr 28 11:57:21.331: INFO: stderr: "" Apr 28 11:57:21.331: INFO: stdout: " " STEP: exposing timestamps Apr 28 11:57:21.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --tail=1 --timestamps' Apr 28 11:57:21.460: INFO: stderr: "" Apr 28 11:57:21.460: INFO: stdout: "2020-04-28T11:57:19.778214534Z 1:M 28 Apr 11:57:19.778 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Apr 28 11:57:23.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --since=1s' Apr 28 11:57:24.082: INFO: stderr: "" Apr 28 11:57:24.082: INFO: stdout: "" Apr 28 11:57:24.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --since=24h' Apr 28 11:57:24.189: INFO: stderr: "" Apr 28 11:57:24.189: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Apr 11:57:19.777 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Apr 11:57:19.777 # Server started, Redis version 3.2.12\n1:M 28 Apr 11:57:19.778 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Apr 11:57:19.778 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Apr 28 11:57:24.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7mk5j' Apr 28 11:57:24.277: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 11:57:24.277: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Apr 28 11:57:24.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-7mk5j' Apr 28 11:57:24.376: INFO: stderr: "No resources found.\n" Apr 28 11:57:24.376: INFO: stdout: "" Apr 28 11:57:24.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-7mk5j -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 11:57:24.466: INFO: stderr: "" Apr 28 11:57:24.466: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:57:24.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7mk5j" for this suite. 
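The filtering steps above map directly onto kubectl log flags; the suite still invokes the deprecated `kubectl log` spelling, but the same flags work with `kubectl logs`. A minimal reproduction against the Redis pod from this test (the namespace is the test's throwaway namespace and no longer exists after teardown):
  $ kubectl logs redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --tail=1
  $ kubectl logs redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --limit-bytes=1
  $ kubectl logs redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --tail=1 --timestamps
  $ kubectl logs redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --since=1s
  $ kubectl logs redis-master-v74t8 redis-master --namespace=e2e-tests-kubectl-7mk5j --since=24h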
Apr 28 11:57:46.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:57:46.506: INFO: namespace: e2e-tests-kubectl-7mk5j, resource: bindings, ignored listing per whitelist Apr 28 11:57:46.556: INFO: namespace e2e-tests-kubectl-7mk5j deletion completed in 22.08702221s • [SLOW TEST:32.211 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:57:46.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Apr 28 11:57:46.654: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix618432748/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:57:46.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mqpcp" for this suite. 
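The proxy test above starts kubectl proxy on a Unix domain socket and reads /api/ through it. A minimal sketch of the same flow (the socket path is an arbitrary example, and the curl step assumes a curl build with --unix-socket support, 7.40 or newer):
  $ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  $ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  $ kill $!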
Apr 28 11:57:52.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:57:52.814: INFO: namespace: e2e-tests-kubectl-mqpcp, resource: bindings, ignored listing per whitelist Apr 28 11:57:52.831: INFO: namespace e2e-tests-kubectl-mqpcp deletion completed in 6.093931456s • [SLOW TEST:6.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:57:52.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-7d775624-8947-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:57:52.923: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-xlkkc" to be "success or failure" Apr 28 11:57:52.934: INFO: Pod "pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.307236ms Apr 28 11:57:54.995: INFO: Pod "pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071304438s Apr 28 11:57:56.999: INFO: Pod "pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075753397s STEP: Saw pod success Apr 28 11:57:56.999: INFO: Pod "pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:57:57.002: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Apr 28 11:57:57.037: INFO: Waiting for pod pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:57:57.041: INFO: Pod pod-projected-configmaps-7d79f767-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:57:57.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xlkkc" for this suite. 
Apr 28 11:58:03.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:58:03.137: INFO: namespace: e2e-tests-projected-xlkkc, resource: bindings, ignored listing per whitelist Apr 28 11:58:03.139: INFO: namespace e2e-tests-projected-xlkkc deletion completed in 6.093939877s • [SLOW TEST:10.307 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:58:03.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-839fb9ae-8947-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 11:58:03.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-9n4kk" to be "success or failure" Apr 28 11:58:03.248: INFO: Pod "pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338553ms Apr 28 11:58:05.252: INFO: Pod "pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008424975s Apr 28 11:58:07.260: INFO: Pod "pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01687077s STEP: Saw pod success Apr 28 11:58:07.260: INFO: Pod "pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:58:07.266: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f container configmap-volume-test: STEP: delete the pod Apr 28 11:58:07.292: INFO: Waiting for pod pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:58:07.323: INFO: Pod pod-configmaps-83a265ad-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:58:07.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9n4kk" for this suite. 
Apr 28 11:58:13.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:58:13.442: INFO: namespace: e2e-tests-configmap-9n4kk, resource: bindings, ignored listing per whitelist Apr 28 11:58:13.469: INFO: namespace e2e-tests-configmap-9n4kk deletion completed in 6.142588472s • [SLOW TEST:10.330 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:58:13.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 28 11:58:13.638: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:13.640: INFO: Number of nodes with available pods: 0 Apr 28 11:58:13.640: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:58:14.645: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:14.648: INFO: Number of nodes with available pods: 0 Apr 28 11:58:14.648: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:58:15.645: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:15.648: INFO: Number of nodes with available pods: 0 Apr 28 11:58:15.648: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:58:16.645: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:16.648: INFO: Number of nodes with available pods: 0 Apr 28 11:58:16.648: INFO: Node hunter-worker is running more than one daemon pod Apr 28 11:58:17.645: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:17.648: INFO: Number of nodes with available pods: 2 Apr 28 11:58:17.648: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 28 11:58:17.679: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:17.682: INFO: Number of nodes with available pods: 1 Apr 28 11:58:17.682: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:18.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:18.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:18.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:19.685: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:19.687: INFO: Number of nodes with available pods: 1 Apr 28 11:58:19.687: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:20.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:20.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:20.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:21.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:21.695: INFO: Number of nodes with available pods: 1 Apr 28 11:58:21.695: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:22.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:22.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:22.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:23.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:23.690: INFO: Number of nodes with available pods: 1 Apr 28 11:58:23.690: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:24.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:24.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:24.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:25.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:25.690: INFO: Number of nodes with available pods: 1 Apr 28 11:58:25.690: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:26.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:26.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:26.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:27.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:27.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:27.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:28.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:28.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:28.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:29.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:29.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:29.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:30.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:30.690: INFO: Number of nodes with available pods: 1 Apr 28 11:58:30.690: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:31.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:31.691: INFO: Number of nodes with available pods: 1 Apr 28 11:58:31.691: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:32.686: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:32.690: INFO: Number of nodes with available pods: 1 Apr 28 11:58:32.690: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:33.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:33.690: INFO: Number of nodes with available pods: 1 Apr 28 11:58:33.690: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 11:58:34.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 11:58:34.690: INFO: Number of nodes with available pods: 2 Apr 28 11:58:34.690: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lczfs, will wait for the garbage collector to delete the pods Apr 28 11:58:34.764: INFO: Deleting DaemonSet.extensions daemon-set took: 17.267821ms Apr 28 11:58:34.864: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.226623ms Apr 28 11:58:38.677: INFO: Number of nodes with available pods: 0 Apr 28 11:58:38.677: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 11:58:38.679: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lczfs/daemonsets","resourceVersion":"7646393"},"items":null} Apr 28 11:58:38.682: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lczfs/pods","resourceVersion":"7646393"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:58:38.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-lczfs" for this suite. Apr 28 11:58:44.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:58:44.726: INFO: namespace: e2e-tests-daemonsets-lczfs, resource: bindings, ignored listing per whitelist Apr 28 11:58:44.780: INFO: namespace e2e-tests-daemonsets-lczfs deletion completed in 6.084801959s • [SLOW TEST:31.311 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:58:44.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Apr 28 11:58:44.928: INFO: Waiting up to 5m0s for pod "downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-6f45b" to be "success or failure" Apr 28 11:58:44.934: INFO: Pod "downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031868ms Apr 28 11:58:46.989: INFO: Pod "downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060985997s Apr 28 11:58:48.993: INFO: Pod "downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064839367s STEP: Saw pod success Apr 28 11:58:48.993: INFO: Pod "downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 11:58:48.996: INFO: Trying to get logs from node hunter-worker2 pod downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 11:58:49.029: INFO: Waiting for pod downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f to disappear Apr 28 11:58:49.060: INFO: Pod downward-api-9c7b73b1-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:58:49.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6f45b" for this suite. 
Apr 28 11:58:55.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:58:55.142: INFO: namespace: e2e-tests-downward-api-6f45b, resource: bindings, ignored listing per whitelist Apr 28 11:58:55.156: INFO: namespace e2e-tests-downward-api-6f45b deletion completed in 6.092419707s • [SLOW TEST:10.376 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:58:55.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:58:59.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-tkwgj" for this suite. 
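The Kubelet test above asserts that a container which always fails reports a terminated state with a reason. That status can be read directly with jsonpath, assuming a placeholder pod name; note that once the container has been restarted, the previous attempt is reported under lastState rather than state:
  $ kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
  $ kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}'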
Apr 28 11:59:05.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 11:59:05.347: INFO: namespace: e2e-tests-kubelet-test-tkwgj, resource: bindings, ignored listing per whitelist Apr 28 11:59:05.394: INFO: namespace e2e-tests-kubelet-test-tkwgj deletion completed in 6.091631077s • [SLOW TEST:10.237 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 11:59:05.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 28 11:59:13.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:13.558: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:15.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:15.563: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:17.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:17.564: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:19.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:19.563: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:21.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:21.562: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:23.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:23.561: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:25.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:25.563: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:27.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:27.562: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:29.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:29.562: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:31.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:31.562: INFO: Pod 
pod-with-poststart-exec-hook still exists Apr 28 11:59:33.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:33.563: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:35.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:35.562: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:37.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:37.562: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:39.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:39.562: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:41.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:41.562: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 11:59:43.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 11:59:43.563: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 11:59:43.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-d89g7" for this suite. Apr 28 12:00:05.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:00:05.634: INFO: namespace: e2e-tests-container-lifecycle-hook-d89g7, resource: bindings, ignored listing per whitelist Apr 28 12:00:05.663: INFO: namespace e2e-tests-container-lifecycle-hook-d89g7 deletion completed in 22.096472191s • [SLOW TEST:60.269 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:00:05.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-82s2p STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-82s2p to expose endpoints map[] Apr 28 12:00:05.855: INFO: Get endpoints failed (12.853321ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 28 12:00:06.870: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-82s2p exposes endpoints map[] 
(1.027715921s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-82s2p STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-82s2p to expose endpoints map[pod1:[100]] Apr 28 12:00:09.911: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-82s2p exposes endpoints map[pod1:[100]] (3.033822218s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-82s2p STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-82s2p to expose endpoints map[pod1:[100] pod2:[101]] Apr 28 12:00:12.976: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-82s2p exposes endpoints map[pod2:[101] pod1:[100]] (3.06119912s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-82s2p STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-82s2p to expose endpoints map[pod2:[101]] Apr 28 12:00:14.035: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-82s2p exposes endpoints map[pod2:[101]] (1.054364952s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-82s2p STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-82s2p to expose endpoints map[] Apr 28 12:00:15.086: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-82s2p exposes endpoints map[] (1.045152611s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:00:15.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-82s2p" for this suite. 
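The endpoint transitions logged above can be observed with ordinary kubectl commands; the service name below is the one the test created, in a namespace that was deleted during teardown:
  $ kubectl get endpoints multi-endpoint-test -n e2e-tests-services-82s2p -o wide
  $ kubectl describe service multi-endpoint-test -n e2e-tests-services-82s2p
  $ kubectl get endpoints multi-endpoint-test -n e2e-tests-services-82s2p --watch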
Apr 28 12:00:37.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:00:37.238: INFO: namespace: e2e-tests-services-82s2p, resource: bindings, ignored listing per whitelist Apr 28 12:00:37.246: INFO: namespace e2e-tests-services-82s2p deletion completed in 22.10443953s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.582 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:00:37.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-df7d1597-8947-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 12:00:37.372: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-96sn5" to be "success or failure" Apr 28 12:00:37.376: INFO: Pod "pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073812ms Apr 28 12:00:39.381: INFO: Pod "pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008365158s Apr 28 12:00:41.384: INFO: Pod "pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011817022s STEP: Saw pod success Apr 28 12:00:41.384: INFO: Pod "pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:00:41.387: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Apr 28 12:00:41.463: INFO: Waiting for pod pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f to disappear Apr 28 12:00:41.472: INFO: Pod pod-projected-secrets-df7f9b60-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:00:41.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-96sn5" for this suite. 
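The projected-secret volume above is the Secret analogue of the earlier projected ConfigMap cases: a secret key is remapped to a different path inside the projected volume. A minimal sketch with placeholder names:
  $ kubectl create secret generic projected-secret-test --from-literal=data-1=value-1 -n <ns>
  $ kubectl explain pod.spec.volumes.projected.sources.secret.items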
Apr 28 12:00:47.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:00:47.499: INFO: namespace: e2e-tests-projected-96sn5, resource: bindings, ignored listing per whitelist Apr 28 12:00:47.551: INFO: namespace e2e-tests-projected-96sn5 deletion completed in 6.076841456s • [SLOW TEST:10.305 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:00:47.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 12:00:51.750: INFO: Waiting up to 5m0s for pod "client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f" in namespace "e2e-tests-pods-vb5nz" to be "success or failure" Apr 28 12:00:51.776: INFO: Pod "client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.152698ms Apr 28 12:00:53.780: INFO: Pod "client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030054019s Apr 28 12:00:55.784: INFO: Pod "client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033964839s STEP: Saw pod success Apr 28 12:00:55.784: INFO: Pod "client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:00:55.787: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f container env3cont: STEP: delete the pod Apr 28 12:00:55.831: INFO: Waiting for pod client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f to disappear Apr 28 12:00:55.855: INFO: Pod client-envvars-e811d91f-8947-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:00:55.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vb5nz" for this suite. 
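The environment-variable check above depends on the backing service existing before the client pod starts, which is why the test provisions a server pod and its service first and only then launches the client-envvars pod. In any pod created after a service, the injected variables follow the <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT convention and can be listed with (pod and namespace are placeholders):
  $ kubectl exec <pod> -n <ns> -- env | grep -i service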
Apr 28 12:01:35.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:01:35.911: INFO: namespace: e2e-tests-pods-vb5nz, resource: bindings, ignored listing per whitelist Apr 28 12:01:35.950: INFO: namespace e2e-tests-pods-vb5nz deletion completed in 40.091531283s • [SLOW TEST:48.398 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:01:35.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 28 12:01:36.129: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-f5jwv,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5jwv/configmaps/e2e-watch-test-resource-version,UID:02795317-8948-11ea-99e8-0242ac110002,ResourceVersion:7646980,Generation:0,CreationTimestamp:2020-04-28 12:01:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 12:01:36.129: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-f5jwv,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5jwv/configmaps/e2e-watch-test-resource-version,UID:02795317-8948-11ea-99e8-0242ac110002,ResourceVersion:7646981,Generation:0,CreationTimestamp:2020-04-28 12:01:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:01:36.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-f5jwv" for this suite. 
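The watch test above opens a watch at the resourceVersion returned by the first update, so only the second MODIFIED event and the DELETED event are delivered, as the two Got lines show. The same behaviour can be reproduced against the raw API through a local proxy (namespace and resourceVersion are placeholders):
  $ kubectl proxy --port=8001 &
  $ curl "http://127.0.0.1:8001/api/v1/namespaces/<ns>/configmaps?watch=true&resourceVersion=<rv>"
  $ kill $!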
Apr 28 12:01:42.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:01:42.210: INFO: namespace: e2e-tests-watch-f5jwv, resource: bindings, ignored listing per whitelist Apr 28 12:01:42.224: INFO: namespace e2e-tests-watch-f5jwv deletion completed in 6.091963633s • [SLOW TEST:6.274 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:01:42.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6zzbb [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Apr 28 12:01:42.328: INFO: Found 0 stateful pods, waiting for 3 Apr 28 12:01:52.334: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 12:01:52.334: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 12:01:52.334: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 28 12:02:02.333: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 12:02:02.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 12:02:02.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 28 12:02:02.362: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 28 12:02:12.412: INFO: Updating stateful set ss2 Apr 28 12:02:12.422: INFO: Waiting for Pod e2e-tests-statefulset-6zzbb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 28 12:02:22.532: INFO: Found 2 stateful pods, waiting for 3 Apr 28 12:02:32.537: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 12:02:32.537: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, 
currently Running - Ready=true Apr 28 12:02:32.537: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 28 12:02:32.568: INFO: Updating stateful set ss2 Apr 28 12:02:32.574: INFO: Waiting for Pod e2e-tests-statefulset-6zzbb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 28 12:02:43.773: INFO: Updating stateful set ss2 Apr 28 12:02:43.989: INFO: Waiting for StatefulSet e2e-tests-statefulset-6zzbb/ss2 to complete update Apr 28 12:02:43.989: INFO: Waiting for Pod e2e-tests-statefulset-6zzbb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 28 12:02:53.998: INFO: Waiting for StatefulSet e2e-tests-statefulset-6zzbb/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Apr 28 12:03:03.999: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6zzbb Apr 28 12:03:04.002: INFO: Scaling statefulset ss2 to 0 Apr 28 12:03:24.018: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 12:03:24.021: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:03:24.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6zzbb" for this suite. Apr 28 12:03:30.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:03:30.143: INFO: namespace: e2e-tests-statefulset-6zzbb, resource: bindings, ignored listing per whitelist Apr 28 12:03:30.179: INFO: namespace e2e-tests-statefulset-6zzbb deletion completed in 6.140519371s • [SLOW TEST:107.955 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:03:30.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-zqhmv Apr 28 12:03:34.311: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-zqhmv STEP: checking the pod's current state 
and verifying that restartCount is present Apr 28 12:03:34.314: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:07:35.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-zqhmv" for this suite. Apr 28 12:07:41.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:07:41.832: INFO: namespace: e2e-tests-container-probe-zqhmv, resource: bindings, ignored listing per whitelist Apr 28 12:07:41.843: INFO: namespace e2e-tests-container-probe-zqhmv deletion completed in 6.195338686s • [SLOW TEST:251.663 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:07:41.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Apr 28 12:07:41.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8qnrh' Apr 28 12:07:44.428: INFO: stderr: "" Apr 28 12:07:44.428: INFO: stdout: "pod/pause created\n" Apr 28 12:07:44.428: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 28 12:07:44.428: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8qnrh" to be "running and ready" Apr 28 12:07:44.438: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.245907ms Apr 28 12:07:46.442: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013886504s Apr 28 12:07:48.446: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.01800134s Apr 28 12:07:48.446: INFO: Pod "pause" satisfied condition "running and ready" Apr 28 12:07:48.446: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Apr 28 12:07:48.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8qnrh' Apr 28 12:07:48.553: INFO: stderr: "" Apr 28 12:07:48.553: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 28 12:07:48.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8qnrh' Apr 28 12:07:48.651: INFO: stderr: "" Apr 28 12:07:48.651: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 28 12:07:48.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-8qnrh' Apr 28 12:07:48.773: INFO: stderr: "" Apr 28 12:07:48.773: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 28 12:07:48.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8qnrh' Apr 28 12:07:48.886: INFO: stderr: "" Apr 28 12:07:48.886: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Apr 28 12:07:48.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8qnrh' Apr 28 12:07:49.017: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 12:07:49.018: INFO: stdout: "pod \"pause\" force deleted\n" Apr 28 12:07:49.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8qnrh' Apr 28 12:07:49.145: INFO: stderr: "No resources found.\n" Apr 28 12:07:49.145: INFO: stdout: "" Apr 28 12:07:49.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8qnrh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 12:07:49.241: INFO: stderr: "" Apr 28 12:07:49.241: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:07:49.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8qnrh" for this suite. 
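The labelling flow above is plain kubectl; a minimal by-hand equivalent is sketched below, assuming any image that stays running (the pause image is used here only as a placeholder).

kubectl run pause --image=k8s.gcr.io/pause:3.1 --restart=Never   # create a long-running pod
kubectl label pod pause testing-label=testing-label-value        # add the label
kubectl get pod pause -L testing-label                           # print the label as an extra column
kubectl label pod pause testing-label-                           # a trailing '-' removes the label
kubectl get pod pause -L testing-label                           # the column is now empty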
Apr 28 12:07:55.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:07:55.462: INFO: namespace: e2e-tests-kubectl-8qnrh, resource: bindings, ignored listing per whitelist Apr 28 12:07:55.468: INFO: namespace e2e-tests-kubectl-8qnrh deletion completed in 6.124407693s • [SLOW TEST:13.626 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:07:55.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 28 12:07:55.655: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:07:55.657: INFO: Number of nodes with available pods: 0 Apr 28 12:07:55.657: INFO: Node hunter-worker is running more than one daemon pod Apr 28 12:07:56.662: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:07:56.665: INFO: Number of nodes with available pods: 0 Apr 28 12:07:56.665: INFO: Node hunter-worker is running more than one daemon pod Apr 28 12:07:57.702: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:07:57.706: INFO: Number of nodes with available pods: 0 Apr 28 12:07:57.706: INFO: Node hunter-worker is running more than one daemon pod Apr 28 12:07:58.974: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:07:58.982: INFO: Number of nodes with available pods: 0 Apr 28 12:07:58.982: INFO: Node hunter-worker is running more than one daemon pod Apr 28 12:07:59.661: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:07:59.664: INFO: Number of nodes with available pods: 1 Apr 28 12:07:59.664: INFO: Node hunter-worker2 is running more than one daemon pod Apr 28 12:08:00.662: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:08:00.665: INFO: Number of nodes with available pods: 2 Apr 28 12:08:00.665: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 28 12:08:00.712: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:08:00.725: INFO: Number of nodes with available pods: 2 Apr 28 12:08:00.725: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2z2lr, will wait for the garbage collector to delete the pods Apr 28 12:08:01.795: INFO: Deleting DaemonSet.extensions daemon-set took: 5.72676ms Apr 28 12:08:01.895: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.222773ms Apr 28 12:08:11.307: INFO: Number of nodes with available pods: 0 Apr 28 12:08:11.307: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 12:08:11.310: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2z2lr/daemonsets","resourceVersion":"7648099"},"items":null} Apr 28 12:08:11.312: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2z2lr/pods","resourceVersion":"7648099"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:08:11.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-2z2lr" for this suite. 
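For reference, the self-healing behaviour checked above (a daemon pod that fails or is deleted is recreated by the controller) can be observed with a minimal DaemonSet. Names and the image are placeholders; the commented toleration is only needed if the pods should also land on the tainted control-plane node mentioned in the log.

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-example
spec:
  selector:
    matchLabels:
      app: daemon-set-example
  template:
    metadata:
      labels:
        app: daemon-set-example
    spec:
      # Uncomment to schedule onto nodes carrying the
      # node-role.kubernetes.io/master:NoSchedule taint seen above.
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   operator: Exists
      #   effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF

# Delete one of the daemon pods; the controller recreates it on the same node.
kubectl delete pod -l app=daemon-set-example --wait=false
kubectl get pods -l app=daemon-set-example -o wide -w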
Apr 28 12:08:17.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:08:17.395: INFO: namespace: e2e-tests-daemonsets-2z2lr, resource: bindings, ignored listing per whitelist Apr 28 12:08:17.414: INFO: namespace e2e-tests-daemonsets-2z2lr deletion completed in 6.089841222s • [SLOW TEST:21.945 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:08:17.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Apr 28 12:08:17.482: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Apr 28 12:08:17.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:17.749: INFO: stderr: "" Apr 28 12:08:17.749: INFO: stdout: "service/redis-slave created\n" Apr 28 12:08:17.750: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Apr 28 12:08:17.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:18.080: INFO: stderr: "" Apr 28 12:08:18.080: INFO: stdout: "service/redis-master created\n" Apr 28 12:08:18.080: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 28 12:08:18.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:18.403: INFO: stderr: "" Apr 28 12:08:18.403: INFO: stdout: "service/frontend created\n" Apr 28 12:08:18.404: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Apr 28 12:08:18.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:18.658: INFO: stderr: "" Apr 28 12:08:18.658: INFO: stdout: "deployment.extensions/frontend created\n" Apr 28 12:08:18.658: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 28 12:08:18.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:18.982: INFO: stderr: "" Apr 28 12:08:18.982: INFO: stdout: "deployment.extensions/redis-master created\n" Apr 28 12:08:18.982: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Apr 28 12:08:18.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:19.275: INFO: stderr: "" Apr 28 12:08:19.275: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Apr 28 12:08:19.275: INFO: Waiting for all frontend pods to be Running. Apr 28 12:08:29.326: INFO: Waiting for frontend to serve content. Apr 28 12:08:29.349: INFO: Trying to add a new entry to the guestbook. Apr 28 12:08:29.364: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 28 12:08:29.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:29.534: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 28 12:08:29.534: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Apr 28 12:08:29.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:29.664: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 12:08:29.664: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 28 12:08:29.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:29.788: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 12:08:29.788: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 28 12:08:29.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:29.891: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 12:08:29.891: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 28 12:08:29.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:30.008: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 12:08:30.008: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 28 12:08:30.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-86rdf' Apr 28 12:08:30.204: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 12:08:30.204: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:08:30.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-86rdf" for this suite. 
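Once the guestbook manifests shown above are created, the validation the spec performs amounts to waiting for the frontend Deployment and fetching content from the frontend Service. A rough by-hand equivalent, assuming the same manifests have been applied in the current namespace; the local port is arbitrary.

kubectl rollout status deployment/frontend        # wait until all 3 frontend replicas are available
kubectl port-forward service/frontend 8080:80 &   # tunnel the ClusterIP service to localhost
curl -s http://127.0.0.1:8080/ | head -n 5        # the guestbook front page should be served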
Apr 28 12:09:12.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:09:12.629: INFO: namespace: e2e-tests-kubectl-86rdf, resource: bindings, ignored listing per whitelist Apr 28 12:09:12.663: INFO: namespace e2e-tests-kubectl-86rdf deletion completed in 42.348262582s • [SLOW TEST:55.249 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:09:12.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-12b5c12a-8949-11ea-80e8-0242ac11000f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:09:18.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-f6d2v" for this suite. 
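The binaryData handling verified above is ordinary ConfigMap behaviour: base64-encoded entries under binaryData are written into the mounted volume as raw bytes, alongside plain data keys. A minimal sketch with hypothetical names, and busybox standing in for the test image.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-data-example
data:
  greeting.txt: hello
binaryData:
  blob.bin: AQID            # base64 for the raw bytes 0x01 0x02 0x03
---
apiVersion: v1
kind: Pod
metadata:
  name: binary-data-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/greeting.txt; od -c /etc/cm/blob.bin"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: binary-data-example
EOF

kubectl logs binary-data-reader   # shows the text key and the three raw bytes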
Apr 28 12:09:40.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:09:40.952: INFO: namespace: e2e-tests-configmap-f6d2v, resource: bindings, ignored listing per whitelist Apr 28 12:09:40.979: INFO: namespace e2e-tests-configmap-f6d2v deletion completed in 22.136132345s • [SLOW TEST:28.315 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:09:40.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-23970414-8949-11ea-80e8-0242ac11000f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-23970414-8949-11ea-80e8-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:09:47.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8d9l9" for this suite. 
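For reference, the live-update behaviour asserted above comes from the kubelet periodically re-syncing ConfigMap-backed volumes: an edit to the ConfigMap shows up inside an already-running pod after a short delay, without restarting it. A minimal sketch with hypothetical names.

kubectl create configmap live-update-example --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: live-update-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: live-update-example
EOF

# Replace the value; after the kubelet's next sync the pod starts printing value-2.
kubectl create configmap live-update-example --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -
kubectl logs -f live-update-watcher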
Apr 28 12:10:09.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:10:09.208: INFO: namespace: e2e-tests-configmap-8d9l9, resource: bindings, ignored listing per whitelist Apr 28 12:10:09.227: INFO: namespace e2e-tests-configmap-8d9l9 deletion completed in 22.083840627s • [SLOW TEST:28.248 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:10:09.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:10:13.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-pdqn7" for this suite. 
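What the spec above checks is the container-level securityContext field readOnlyRootFilesystem: with it set, any write to the container's root filesystem fails. A small stand-alone sketch, with hypothetical names and busybox in place of the test image.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-example
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox
    command: ["sh", "-c", "touch /should-fail && echo rootfs-is-writable || echo rootfs-is-read-only"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs readonly-rootfs-example   # expected: rootfs-is-read-only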
Apr 28 12:11:03.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:11:03.487: INFO: namespace: e2e-tests-kubelet-test-pdqn7, resource: bindings, ignored listing per whitelist Apr 28 12:11:03.491: INFO: namespace e2e-tests-kubelet-test-pdqn7 deletion completed in 50.112562584s • [SLOW TEST:54.264 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:11:03.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Apr 28 12:11:08.174: INFO: Successfully updated pod "annotationupdate54c24e4a-8949-11ea-80e8-0242ac11000f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:11:10.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gqgwd" for this suite. 
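The annotation-update behaviour exercised above relies on a downwardAPI volume: metadata.annotations is projected into a file, and the kubelet rewrites that file when the pod's annotations change. A minimal sketch with hypothetical names.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-volume-example
  annotations:
    build: "1"
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# Change the annotation; the mounted file is refreshed on the kubelet's next sync.
kubectl annotate pod annotation-volume-example build="2" --overwrite
kubectl logs -f annotation-volume-example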
Apr 28 12:11:32.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:11:32.265: INFO: namespace: e2e-tests-downward-api-gqgwd, resource: bindings, ignored listing per whitelist Apr 28 12:11:32.294: INFO: namespace e2e-tests-downward-api-gqgwd deletion completed in 22.094198903s • [SLOW TEST:28.802 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:11:32.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2km92 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2km92 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2km92 Apr 28 12:11:32.419: INFO: Found 0 stateful pods, waiting for 1 Apr 28 12:11:42.424: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 28 12:11:42.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2km92 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 12:11:42.701: INFO: stderr: "I0428 12:11:42.559778 3383 log.go:172] (0xc000162840) (0xc00078a640) Create stream\nI0428 12:11:42.559860 3383 log.go:172] (0xc000162840) (0xc00078a640) Stream added, broadcasting: 1\nI0428 12:11:42.563491 3383 log.go:172] (0xc000162840) Reply frame received for 1\nI0428 12:11:42.563554 3383 log.go:172] (0xc000162840) (0xc0005f2d20) Create stream\nI0428 12:11:42.563575 3383 log.go:172] (0xc000162840) (0xc0005f2d20) Stream added, broadcasting: 3\nI0428 12:11:42.564567 3383 log.go:172] (0xc000162840) Reply frame received for 3\nI0428 12:11:42.564634 3383 log.go:172] (0xc000162840) (0xc00078a6e0) Create stream\nI0428 12:11:42.564667 3383 log.go:172] (0xc000162840) (0xc00078a6e0) Stream added, broadcasting: 5\nI0428 12:11:42.565632 3383 log.go:172] (0xc000162840) Reply frame received for 5\nI0428 12:11:42.695301 3383 log.go:172] (0xc000162840) Data frame received for 3\nI0428 12:11:42.695339 3383 log.go:172] (0xc0005f2d20) (3) Data frame handling\nI0428 12:11:42.695348 3383 
log.go:172] (0xc0005f2d20) (3) Data frame sent\nI0428 12:11:42.695353 3383 log.go:172] (0xc000162840) Data frame received for 3\nI0428 12:11:42.695357 3383 log.go:172] (0xc0005f2d20) (3) Data frame handling\nI0428 12:11:42.695381 3383 log.go:172] (0xc000162840) Data frame received for 5\nI0428 12:11:42.695389 3383 log.go:172] (0xc00078a6e0) (5) Data frame handling\nI0428 12:11:42.697658 3383 log.go:172] (0xc000162840) Data frame received for 1\nI0428 12:11:42.697703 3383 log.go:172] (0xc00078a640) (1) Data frame handling\nI0428 12:11:42.697716 3383 log.go:172] (0xc00078a640) (1) Data frame sent\nI0428 12:11:42.697725 3383 log.go:172] (0xc000162840) (0xc00078a640) Stream removed, broadcasting: 1\nI0428 12:11:42.697822 3383 log.go:172] (0xc000162840) Go away received\nI0428 12:11:42.697884 3383 log.go:172] (0xc000162840) (0xc00078a640) Stream removed, broadcasting: 1\nI0428 12:11:42.697902 3383 log.go:172] (0xc000162840) (0xc0005f2d20) Stream removed, broadcasting: 3\nI0428 12:11:42.697911 3383 log.go:172] (0xc000162840) (0xc00078a6e0) Stream removed, broadcasting: 5\n" Apr 28 12:11:42.701: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 12:11:42.701: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 12:11:42.705: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 28 12:11:52.709: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 12:11:52.709: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 12:11:52.731: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:11:52.732: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:11:52.732: INFO: Apr 28 12:11:52.732: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 28 12:11:53.736: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986794018s Apr 28 12:11:54.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982281713s Apr 28 12:11:55.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.86882922s Apr 28 12:11:56.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.864657326s Apr 28 12:11:57.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.85970757s Apr 28 12:11:58.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.854420669s Apr 28 12:11:59.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.849083947s Apr 28 12:12:00.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.83848075s Apr 28 12:12:01.890: INFO: Verifying statefulset ss doesn't scale past 3 for another 834.075038ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2km92 Apr 28 12:12:02.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2km92 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 12:12:03.150: INFO: stderr: "I0428 
12:12:03.059192 3406 log.go:172] (0xc000138790) (0xc000738640) Create stream\nI0428 12:12:03.059256 3406 log.go:172] (0xc000138790) (0xc000738640) Stream added, broadcasting: 1\nI0428 12:12:03.062179 3406 log.go:172] (0xc000138790) Reply frame received for 1\nI0428 12:12:03.062222 3406 log.go:172] (0xc000138790) (0xc0005f8dc0) Create stream\nI0428 12:12:03.062237 3406 log.go:172] (0xc000138790) (0xc0005f8dc0) Stream added, broadcasting: 3\nI0428 12:12:03.063101 3406 log.go:172] (0xc000138790) Reply frame received for 3\nI0428 12:12:03.063164 3406 log.go:172] (0xc000138790) (0xc000024000) Create stream\nI0428 12:12:03.063184 3406 log.go:172] (0xc000138790) (0xc000024000) Stream added, broadcasting: 5\nI0428 12:12:03.064203 3406 log.go:172] (0xc000138790) Reply frame received for 5\nI0428 12:12:03.145092 3406 log.go:172] (0xc000138790) Data frame received for 5\nI0428 12:12:03.145277 3406 log.go:172] (0xc000024000) (5) Data frame handling\nI0428 12:12:03.145296 3406 log.go:172] (0xc000138790) Data frame received for 3\nI0428 12:12:03.145301 3406 log.go:172] (0xc0005f8dc0) (3) Data frame handling\nI0428 12:12:03.145307 3406 log.go:172] (0xc0005f8dc0) (3) Data frame sent\nI0428 12:12:03.145312 3406 log.go:172] (0xc000138790) Data frame received for 3\nI0428 12:12:03.145316 3406 log.go:172] (0xc0005f8dc0) (3) Data frame handling\nI0428 12:12:03.146870 3406 log.go:172] (0xc000138790) Data frame received for 1\nI0428 12:12:03.146887 3406 log.go:172] (0xc000738640) (1) Data frame handling\nI0428 12:12:03.146896 3406 log.go:172] (0xc000738640) (1) Data frame sent\nI0428 12:12:03.146921 3406 log.go:172] (0xc000138790) (0xc000738640) Stream removed, broadcasting: 1\nI0428 12:12:03.147096 3406 log.go:172] (0xc000138790) (0xc000738640) Stream removed, broadcasting: 1\nI0428 12:12:03.147112 3406 log.go:172] (0xc000138790) (0xc0005f8dc0) Stream removed, broadcasting: 3\nI0428 12:12:03.147121 3406 log.go:172] (0xc000138790) (0xc000024000) Stream removed, broadcasting: 5\n" Apr 28 12:12:03.150: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 12:12:03.150: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 12:12:03.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2km92 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 12:12:03.365: INFO: stderr: "I0428 12:12:03.275617 3429 log.go:172] (0xc000830160) (0xc000520d20) Create stream\nI0428 12:12:03.275691 3429 log.go:172] (0xc000830160) (0xc000520d20) Stream added, broadcasting: 1\nI0428 12:12:03.277966 3429 log.go:172] (0xc000830160) Reply frame received for 1\nI0428 12:12:03.278024 3429 log.go:172] (0xc000830160) (0xc0007fe000) Create stream\nI0428 12:12:03.278050 3429 log.go:172] (0xc000830160) (0xc0007fe000) Stream added, broadcasting: 3\nI0428 12:12:03.279042 3429 log.go:172] (0xc000830160) Reply frame received for 3\nI0428 12:12:03.279072 3429 log.go:172] (0xc000830160) (0xc0007fe0a0) Create stream\nI0428 12:12:03.279084 3429 log.go:172] (0xc000830160) (0xc0007fe0a0) Stream added, broadcasting: 5\nI0428 12:12:03.279951 3429 log.go:172] (0xc000830160) Reply frame received for 5\nI0428 12:12:03.358344 3429 log.go:172] (0xc000830160) Data frame received for 5\nI0428 12:12:03.358375 3429 log.go:172] (0xc0007fe0a0) (5) Data frame handling\nI0428 12:12:03.358391 3429 log.go:172] (0xc0007fe0a0) (5) Data frame sent\nI0428 12:12:03.358404 3429 
log.go:172] (0xc000830160) Data frame received for 5\nI0428 12:12:03.358411 3429 log.go:172] (0xc0007fe0a0) (5) Data frame handling\nI0428 12:12:03.358436 3429 log.go:172] (0xc000830160) Data frame received for 3\nI0428 12:12:03.358446 3429 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0428 12:12:03.358459 3429 log.go:172] (0xc0007fe000) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0428 12:12:03.358469 3429 log.go:172] (0xc000830160) Data frame received for 3\nI0428 12:12:03.358534 3429 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0428 12:12:03.360257 3429 log.go:172] (0xc000830160) Data frame received for 1\nI0428 12:12:03.360309 3429 log.go:172] (0xc000520d20) (1) Data frame handling\nI0428 12:12:03.360348 3429 log.go:172] (0xc000520d20) (1) Data frame sent\nI0428 12:12:03.360382 3429 log.go:172] (0xc000830160) (0xc000520d20) Stream removed, broadcasting: 1\nI0428 12:12:03.360421 3429 log.go:172] (0xc000830160) Go away received\nI0428 12:12:03.360594 3429 log.go:172] (0xc000830160) (0xc000520d20) Stream removed, broadcasting: 1\nI0428 12:12:03.360617 3429 log.go:172] (0xc000830160) (0xc0007fe000) Stream removed, broadcasting: 3\nI0428 12:12:03.360628 3429 log.go:172] (0xc000830160) (0xc0007fe0a0) Stream removed, broadcasting: 5\n" Apr 28 12:12:03.366: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 12:12:03.366: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 12:12:03.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2km92 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 12:12:03.566: INFO: stderr: "I0428 12:12:03.491807 3452 log.go:172] (0xc000138790) (0xc00060b540) Create stream\nI0428 12:12:03.491870 3452 log.go:172] (0xc000138790) (0xc00060b540) Stream added, broadcasting: 1\nI0428 12:12:03.496413 3452 log.go:172] (0xc000138790) Reply frame received for 1\nI0428 12:12:03.496454 3452 log.go:172] (0xc000138790) (0xc00060b5e0) Create stream\nI0428 12:12:03.496475 3452 log.go:172] (0xc000138790) (0xc00060b5e0) Stream added, broadcasting: 3\nI0428 12:12:03.497713 3452 log.go:172] (0xc000138790) Reply frame received for 3\nI0428 12:12:03.497740 3452 log.go:172] (0xc000138790) (0xc000528140) Create stream\nI0428 12:12:03.497747 3452 log.go:172] (0xc000138790) (0xc000528140) Stream added, broadcasting: 5\nI0428 12:12:03.498417 3452 log.go:172] (0xc000138790) Reply frame received for 5\nI0428 12:12:03.553751 3452 log.go:172] (0xc000138790) Data frame received for 3\nI0428 12:12:03.553801 3452 log.go:172] (0xc00060b5e0) (3) Data frame handling\nI0428 12:12:03.553814 3452 log.go:172] (0xc00060b5e0) (3) Data frame sent\nI0428 12:12:03.553821 3452 log.go:172] (0xc000138790) Data frame received for 3\nI0428 12:12:03.553825 3452 log.go:172] (0xc00060b5e0) (3) Data frame handling\nI0428 12:12:03.553851 3452 log.go:172] (0xc000138790) Data frame received for 5\nI0428 12:12:03.553862 3452 log.go:172] (0xc000528140) (5) Data frame handling\nI0428 12:12:03.553868 3452 log.go:172] (0xc000528140) (5) Data frame sent\nI0428 12:12:03.553873 3452 log.go:172] (0xc000138790) Data frame received for 5\nI0428 12:12:03.553877 3452 log.go:172] (0xc000528140) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0428 12:12:03.560893 3452 log.go:172] (0xc000138790) Data frame received for 1\nI0428 
12:12:03.561024 3452 log.go:172] (0xc00060b540) (1) Data frame handling\nI0428 12:12:03.561253 3452 log.go:172] (0xc00060b540) (1) Data frame sent\nI0428 12:12:03.561561 3452 log.go:172] (0xc000138790) (0xc00060b540) Stream removed, broadcasting: 1\nI0428 12:12:03.561772 3452 log.go:172] (0xc000138790) Go away received\nI0428 12:12:03.561880 3452 log.go:172] (0xc000138790) (0xc00060b540) Stream removed, broadcasting: 1\nI0428 12:12:03.561993 3452 log.go:172] (0xc000138790) (0xc00060b5e0) Stream removed, broadcasting: 3\nI0428 12:12:03.562108 3452 log.go:172] (0xc000138790) (0xc000528140) Stream removed, broadcasting: 5\n" Apr 28 12:12:03.566: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 12:12:03.566: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 12:12:03.570: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 28 12:12:13.575: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 12:12:13.575: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 12:12:13.575: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 28 12:12:13.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2km92 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 12:12:13.801: INFO: stderr: "I0428 12:12:13.707930 3475 log.go:172] (0xc0007c42c0) (0xc00071a640) Create stream\nI0428 12:12:13.707982 3475 log.go:172] (0xc0007c42c0) (0xc00071a640) Stream added, broadcasting: 1\nI0428 12:12:13.710173 3475 log.go:172] (0xc0007c42c0) Reply frame received for 1\nI0428 12:12:13.710215 3475 log.go:172] (0xc0007c42c0) (0xc00071a6e0) Create stream\nI0428 12:12:13.710224 3475 log.go:172] (0xc0007c42c0) (0xc00071a6e0) Stream added, broadcasting: 3\nI0428 12:12:13.711206 3475 log.go:172] (0xc0007c42c0) Reply frame received for 3\nI0428 12:12:13.711238 3475 log.go:172] (0xc0007c42c0) (0xc0002a0c80) Create stream\nI0428 12:12:13.711251 3475 log.go:172] (0xc0007c42c0) (0xc0002a0c80) Stream added, broadcasting: 5\nI0428 12:12:13.712176 3475 log.go:172] (0xc0007c42c0) Reply frame received for 5\nI0428 12:12:13.795591 3475 log.go:172] (0xc0007c42c0) Data frame received for 5\nI0428 12:12:13.795626 3475 log.go:172] (0xc0002a0c80) (5) Data frame handling\nI0428 12:12:13.795674 3475 log.go:172] (0xc0007c42c0) Data frame received for 3\nI0428 12:12:13.795721 3475 log.go:172] (0xc00071a6e0) (3) Data frame handling\nI0428 12:12:13.795741 3475 log.go:172] (0xc00071a6e0) (3) Data frame sent\nI0428 12:12:13.795767 3475 log.go:172] (0xc0007c42c0) Data frame received for 3\nI0428 12:12:13.795778 3475 log.go:172] (0xc00071a6e0) (3) Data frame handling\nI0428 12:12:13.797451 3475 log.go:172] (0xc0007c42c0) Data frame received for 1\nI0428 12:12:13.797486 3475 log.go:172] (0xc00071a640) (1) Data frame handling\nI0428 12:12:13.797521 3475 log.go:172] (0xc00071a640) (1) Data frame sent\nI0428 12:12:13.797547 3475 log.go:172] (0xc0007c42c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0428 12:12:13.797581 3475 log.go:172] (0xc0007c42c0) Go away received\nI0428 12:12:13.797828 3475 log.go:172] (0xc0007c42c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0428 12:12:13.797861 3475 log.go:172] (0xc0007c42c0) 
(0xc00071a6e0) Stream removed, broadcasting: 3\nI0428 12:12:13.797882 3475 log.go:172] (0xc0007c42c0) (0xc0002a0c80) Stream removed, broadcasting: 5\n" Apr 28 12:12:13.801: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 12:12:13.801: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 12:12:13.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2km92 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 12:12:14.038: INFO: stderr: "I0428 12:12:13.938572 3498 log.go:172] (0xc0003c82c0) (0xc000624c80) Create stream\nI0428 12:12:13.938647 3498 log.go:172] (0xc0003c82c0) (0xc000624c80) Stream added, broadcasting: 1\nI0428 12:12:13.941422 3498 log.go:172] (0xc0003c82c0) Reply frame received for 1\nI0428 12:12:13.941478 3498 log.go:172] (0xc0003c82c0) (0xc00008e000) Create stream\nI0428 12:12:13.941493 3498 log.go:172] (0xc0003c82c0) (0xc00008e000) Stream added, broadcasting: 3\nI0428 12:12:13.942407 3498 log.go:172] (0xc0003c82c0) Reply frame received for 3\nI0428 12:12:13.942446 3498 log.go:172] (0xc0003c82c0) (0xc000624dc0) Create stream\nI0428 12:12:13.942457 3498 log.go:172] (0xc0003c82c0) (0xc000624dc0) Stream added, broadcasting: 5\nI0428 12:12:13.943392 3498 log.go:172] (0xc0003c82c0) Reply frame received for 5\nI0428 12:12:14.030276 3498 log.go:172] (0xc0003c82c0) Data frame received for 3\nI0428 12:12:14.030326 3498 log.go:172] (0xc00008e000) (3) Data frame handling\nI0428 12:12:14.030367 3498 log.go:172] (0xc00008e000) (3) Data frame sent\nI0428 12:12:14.030531 3498 log.go:172] (0xc0003c82c0) Data frame received for 3\nI0428 12:12:14.030569 3498 log.go:172] (0xc00008e000) (3) Data frame handling\nI0428 12:12:14.030634 3498 log.go:172] (0xc0003c82c0) Data frame received for 5\nI0428 12:12:14.030690 3498 log.go:172] (0xc000624dc0) (5) Data frame handling\nI0428 12:12:14.032288 3498 log.go:172] (0xc0003c82c0) Data frame received for 1\nI0428 12:12:14.032323 3498 log.go:172] (0xc000624c80) (1) Data frame handling\nI0428 12:12:14.032368 3498 log.go:172] (0xc000624c80) (1) Data frame sent\nI0428 12:12:14.032427 3498 log.go:172] (0xc0003c82c0) (0xc000624c80) Stream removed, broadcasting: 1\nI0428 12:12:14.032465 3498 log.go:172] (0xc0003c82c0) Go away received\nI0428 12:12:14.032700 3498 log.go:172] (0xc0003c82c0) (0xc000624c80) Stream removed, broadcasting: 1\nI0428 12:12:14.032723 3498 log.go:172] (0xc0003c82c0) (0xc00008e000) Stream removed, broadcasting: 3\nI0428 12:12:14.032737 3498 log.go:172] (0xc0003c82c0) (0xc000624dc0) Stream removed, broadcasting: 5\n" Apr 28 12:12:14.038: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 12:12:14.038: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 12:12:14.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2km92 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 12:12:14.283: INFO: stderr: "I0428 12:12:14.158163 3520 log.go:172] (0xc00013e840) (0xc00032d4a0) Create stream\nI0428 12:12:14.158222 3520 log.go:172] (0xc00013e840) (0xc00032d4a0) Stream added, broadcasting: 1\nI0428 12:12:14.160781 3520 log.go:172] (0xc00013e840) Reply frame received for 1\nI0428 12:12:14.160851 3520 log.go:172] (0xc00013e840) (0xc00060e000) Create 
stream\nI0428 12:12:14.160878 3520 log.go:172] (0xc00013e840) (0xc00060e000) Stream added, broadcasting: 3\nI0428 12:12:14.161984 3520 log.go:172] (0xc00013e840) Reply frame received for 3\nI0428 12:12:14.162036 3520 log.go:172] (0xc00013e840) (0xc00060e0a0) Create stream\nI0428 12:12:14.162047 3520 log.go:172] (0xc00013e840) (0xc00060e0a0) Stream added, broadcasting: 5\nI0428 12:12:14.162955 3520 log.go:172] (0xc00013e840) Reply frame received for 5\nI0428 12:12:14.275823 3520 log.go:172] (0xc00013e840) Data frame received for 3\nI0428 12:12:14.275866 3520 log.go:172] (0xc00060e000) (3) Data frame handling\nI0428 12:12:14.275905 3520 log.go:172] (0xc00060e000) (3) Data frame sent\nI0428 12:12:14.276131 3520 log.go:172] (0xc00013e840) Data frame received for 3\nI0428 12:12:14.276170 3520 log.go:172] (0xc00060e000) (3) Data frame handling\nI0428 12:12:14.276195 3520 log.go:172] (0xc00013e840) Data frame received for 5\nI0428 12:12:14.276207 3520 log.go:172] (0xc00060e0a0) (5) Data frame handling\nI0428 12:12:14.277936 3520 log.go:172] (0xc00013e840) Data frame received for 1\nI0428 12:12:14.277970 3520 log.go:172] (0xc00032d4a0) (1) Data frame handling\nI0428 12:12:14.277999 3520 log.go:172] (0xc00032d4a0) (1) Data frame sent\nI0428 12:12:14.278015 3520 log.go:172] (0xc00013e840) (0xc00032d4a0) Stream removed, broadcasting: 1\nI0428 12:12:14.278032 3520 log.go:172] (0xc00013e840) Go away received\nI0428 12:12:14.278326 3520 log.go:172] (0xc00013e840) (0xc00032d4a0) Stream removed, broadcasting: 1\nI0428 12:12:14.278359 3520 log.go:172] (0xc00013e840) (0xc00060e000) Stream removed, broadcasting: 3\nI0428 12:12:14.278375 3520 log.go:172] (0xc00013e840) (0xc00060e0a0) Stream removed, broadcasting: 5\n" Apr 28 12:12:14.283: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 12:12:14.283: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 12:12:14.283: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 12:12:14.287: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 28 12:12:24.296: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 12:12:24.296: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 28 12:12:24.296: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 28 12:12:24.314: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:24.314: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:12:24.314: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 
12:12:24.314: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:24.314: INFO: Apr 28 12:12:24.314: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 12:12:25.320: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:25.320: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:12:25.320: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:25.320: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:25.320: INFO: Apr 28 12:12:25.320: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 12:12:26.325: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:26.325: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:12:26.325: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:26.325: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:26.325: INFO: Apr 28 12:12:26.326: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 12:12:27.331: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:27.331: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:12:27.331: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:27.331: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:27.331: INFO: Apr 28 12:12:27.331: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 12:12:28.337: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:28.337: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:12:28.337: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:28.337: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:28.337: 
INFO: Apr 28 12:12:28.337: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 12:12:29.342: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:29.342: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:12:29.343: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:29.343: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:29.343: INFO: Apr 28 12:12:29.343: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 12:12:30.353: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:30.354: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:32 +0000 UTC }] Apr 28 12:12:30.354: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:30.354: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:30.354: INFO: Apr 28 12:12:30.355: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 12:12:31.358: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 12:12:31.358: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-04-28 12:11:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:12:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:11:52 +0000 UTC }] Apr 28 12:12:31.358: INFO: Apr 28 12:12:31.358: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 28 12:12:32.362: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.945811664s Apr 28 12:12:33.382: INFO: Verifying statefulset ss doesn't scale past 0 for another 942.184584ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-2km92 Apr 28 12:12:34.387: INFO: Scaling statefulset ss to 0 Apr 28 12:12:34.397: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Apr 28 12:12:34.399: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2km92 Apr 28 12:12:34.403: INFO: Scaling statefulset ss to 0 Apr 28 12:12:34.411: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 12:12:34.414: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:12:34.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2km92" for this suite. Apr 28 12:12:40.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:12:40.513: INFO: namespace: e2e-tests-statefulset-2km92, resource: bindings, ignored listing per whitelist Apr 28 12:12:40.575: INFO: namespace e2e-tests-statefulset-2km92 deletion completed in 6.120589356s • [SLOW TEST:68.282 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:12:40.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 12:12:40.672: INFO: 
Waiting up to 5m0s for pod "downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-flps4" to be "success or failure" Apr 28 12:12:40.686: INFO: Pod "downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.201237ms Apr 28 12:12:42.691: INFO: Pod "downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01839039s Apr 28 12:12:44.695: INFO: Pod "downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023064694s STEP: Saw pod success Apr 28 12:12:44.695: INFO: Pod "downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:12:44.699: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 12:12:44.768: INFO: Waiting for pod downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f to disappear Apr 28 12:12:44.772: INFO: Pod downwardapi-volume-8e9f9104-8949-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:12:44.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-flps4" for this suite. Apr 28 12:12:50.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:12:50.866: INFO: namespace: e2e-tests-projected-flps4, resource: bindings, ignored listing per whitelist Apr 28 12:12:50.880: INFO: namespace e2e-tests-projected-flps4 deletion completed in 6.104176235s • [SLOW TEST:10.304 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:12:50.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-94c2ea20-8949-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 12:12:50.986: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-b9j7f" to be "success or failure" Apr 28 12:12:50.990: INFO: Pod "pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.46979ms Apr 28 12:12:52.995: INFO: Pod "pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008865314s Apr 28 12:12:54.999: INFO: Pod "pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013454362s STEP: Saw pod success Apr 28 12:12:54.999: INFO: Pod "pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:12:55.002: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Apr 28 12:12:55.040: INFO: Waiting for pod pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f to disappear Apr 28 12:12:55.059: INFO: Pod pod-projected-configmaps-94c54d3c-8949-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:12:55.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b9j7f" for this suite. Apr 28 12:13:01.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:13:01.154: INFO: namespace: e2e-tests-projected-b9j7f, resource: bindings, ignored listing per whitelist Apr 28 12:13:01.156: INFO: namespace e2e-tests-projected-b9j7f deletion completed in 6.094889113s • [SLOW TEST:10.277 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:13:01.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 12:13:01.317: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 28 12:13:06.322: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 28 12:13:06.322: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Apr 28 12:13:06.340: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-22vdn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-22vdn/deployments/test-cleanup-deployment,UID:9deb5ade-8949-11ea-99e8-0242ac110002,ResourceVersion:7649216,Generation:1,CreationTimestamp:2020-04-28 12:13:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 28 12:13:06.347: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Apr 28 12:13:06.347: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 28 12:13:06.347: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-22vdn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-22vdn/replicasets/test-cleanup-controller,UID:9ae8a791-8949-11ea-99e8-0242ac110002,ResourceVersion:7649217,Generation:1,CreationTimestamp:2020-04-28 12:13:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9deb5ade-8949-11ea-99e8-0242ac110002 0xc001143237 0xc001143238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 28 12:13:06.354: INFO: Pod "test-cleanup-controller-ltpds" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-ltpds,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-22vdn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-22vdn/pods/test-cleanup-controller-ltpds,UID:9aef6a5c-8949-11ea-99e8-0242ac110002,ResourceVersion:7649210,Generation:0,CreationTimestamp:2020-04-28 12:13:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 9ae8a791-8949-11ea-99e8-0242ac110002 0xc0011438b7 0xc0011438b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kjmlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kjmlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-kjmlz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001143930} {node.kubernetes.io/unreachable Exists NoExecute 0xc001143950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:13:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:13:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:13:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:13:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.230,StartTime:2020-04-28 12:13:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 12:13:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f5c0ca7a10bfbc6e861348a8a4aa60de8418deca84c94b07f8a02bd3914f1344}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:13:06.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-22vdn" for this suite. 
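Note on the object dump above: the cleanup Deployment is created with RevisionHistoryLimit set to 0, which is what makes the controller delete superseded ReplicaSets rather than keep them for rollback. A minimal out-of-band sketch of the same behavior, with purely illustrative names and images (not taken from the suite):

# Sketch: revisionHistoryLimit: 0 causes old ReplicaSets to be garbage-collected after a rollout.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: web
        image: docker.io/library/nginx:1.14-alpine
EOF
# Roll out a new image, then check that the superseded ReplicaSet is gone:
kubectl set image deployment/cleanup-demo web=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl get replicasets -l app=cleanup-demo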
Apr 28 12:13:12.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:13:12.497: INFO: namespace: e2e-tests-deployment-22vdn, resource: bindings, ignored listing per whitelist Apr 28 12:13:12.544: INFO: namespace e2e-tests-deployment-22vdn deletion completed in 6.123137577s • [SLOW TEST:11.388 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:13:12.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-a1b2716e-8949-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 12:13:12.701: INFO: Waiting up to 5m0s for pod "pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-wcxwt" to be "success or failure" Apr 28 12:13:12.722: INFO: Pod "pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.503083ms Apr 28 12:13:14.772: INFO: Pod "pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07184053s Apr 28 12:13:16.777: INFO: Pod "pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076148274s STEP: Saw pod success Apr 28 12:13:16.777: INFO: Pod "pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:13:16.780: INFO: Trying to get logs from node hunter-worker pod pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 12:13:16.828: INFO: Waiting for pod pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f to disappear Apr 28 12:13:16.842: INFO: Pod pod-secrets-a1b2dda3-8949-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:13:16.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wcxwt" for this suite. 
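The "with mappings" variant above projects individual Secret keys to caller-chosen file names through the volume's items list. A hedged sketch of that shape, with illustrative names and values:

# Sketch: map the Secret key data-1 to the file new-path-data-1 inside the mount.
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1
EOF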
Apr 28 12:13:22.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:13:22.932: INFO: namespace: e2e-tests-secrets-wcxwt, resource: bindings, ignored listing per whitelist Apr 28 12:13:22.940: INFO: namespace e2e-tests-secrets-wcxwt deletion completed in 6.095144089s • [SLOW TEST:10.395 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:13:22.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:13:27.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6rr9p" for this suite. 
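The hostAliases case above does not print the pod spec it used, but the field it exercises is pod-level: entries under spec.hostAliases are rendered by the kubelet into the container's /etc/hosts. A minimal sketch, with illustrative IPs and hostnames:

# Sketch: hostAliases entries end up in the pod's /etc/hosts.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # should list 127.0.0.1 foo.local bar.local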
Apr 28 12:14:17.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:14:17.164: INFO: namespace: e2e-tests-kubelet-test-6rr9p, resource: bindings, ignored listing per whitelist Apr 28 12:14:17.200: INFO: namespace e2e-tests-kubelet-test-6rr9p deletion completed in 50.097791234s • [SLOW TEST:54.260 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:14:17.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c83e9271-8949-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 12:14:17.392: INFO: Waiting up to 5m0s for pod "pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-zl9jz" to be "success or failure" Apr 28 12:14:17.443: INFO: Pod "pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.239525ms Apr 28 12:14:19.450: INFO: Pod "pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057781426s Apr 28 12:14:21.454: INFO: Pod "pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061787597s STEP: Saw pod success Apr 28 12:14:21.454: INFO: Pod "pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:14:21.457: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 12:14:21.475: INFO: Waiting for pod pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f to disappear Apr 28 12:14:21.486: INFO: Pod pod-secrets-c843d87e-8949-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:14:21.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zl9jz" for this suite. 
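For the defaultMode case above, the knob under test sits on the secret volume source: defaultMode fixes the permission bits of the projected files. A sketch with an illustrative secret name and mode value:

# Sketch: defaultMode: 0400 projects the Secret's files owner-read-only.
kubectl create secret generic demo-secret-modes --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret-modes
      defaultMode: 0400   # octal literal; owner read-only
EOF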
Apr 28 12:14:27.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:14:27.558: INFO: namespace: e2e-tests-secrets-zl9jz, resource: bindings, ignored listing per whitelist Apr 28 12:14:27.575: INFO: namespace e2e-tests-secrets-zl9jz deletion completed in 6.086024672s • [SLOW TEST:10.375 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:14:27.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ce65c647-8949-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume secrets Apr 28 12:14:27.690: INFO: Waiting up to 5m0s for pod "pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f" in namespace "e2e-tests-secrets-qrxnd" to be "success or failure" Apr 28 12:14:27.731: INFO: Pod "pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 41.277734ms Apr 28 12:14:29.735: INFO: Pod "pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045615031s Apr 28 12:14:31.740: INFO: Pod "pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049990193s STEP: Saw pod success Apr 28 12:14:31.740: INFO: Pod "pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:14:31.743: INFO: Trying to get logs from node hunter-worker pod pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f container secret-volume-test: STEP: delete the pod Apr 28 12:14:31.781: INFO: Waiting for pod pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f to disappear Apr 28 12:14:31.803: INFO: Pod pod-secrets-ce67f9b8-8949-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:14:31.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qrxnd" for this suite. 
Apr 28 12:14:37.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:14:37.855: INFO: namespace: e2e-tests-secrets-qrxnd, resource: bindings, ignored listing per whitelist Apr 28 12:14:37.885: INFO: namespace e2e-tests-secrets-qrxnd deletion completed in 6.078046701s • [SLOW TEST:10.309 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:14:37.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 12:14:37.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dp7mr' Apr 28 12:14:38.065: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 12:14:38.065: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 28 12:14:38.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-dp7mr' Apr 28 12:14:38.190: INFO: stderr: "" Apr 28 12:14:38.190: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:14:38.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dp7mr" for this suite. 
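The stderr above flags kubectl run --generator=job/v1 as deprecated. The v1.13 client used in this run still relies on the generator form; on newer kubectl releases the same Job is created with kubectl create job (not part of this suite, shown only for comparison):

# Deprecated form, as logged above:
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine
# Replacement on newer clients (kubectl v1.14+):
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get jobs e2e-test-nginx-job
kubectl delete job e2e-test-nginx-job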
Apr 28 12:15:00.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:15:00.334: INFO: namespace: e2e-tests-kubectl-dp7mr, resource: bindings, ignored listing per whitelist Apr 28 12:15:00.338: INFO: namespace e2e-tests-kubectl-dp7mr deletion completed in 22.145182227s • [SLOW TEST:22.453 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:15:00.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Apr 28 12:15:00.417: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 12:15:00.455: INFO: Waiting for terminating namespaces to be deleted... Apr 28 12:15:00.458: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Apr 28 12:15:00.463: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Apr 28 12:15:00.463: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 12:15:00.463: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 12:15:00.463: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 12:15:00.463: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Apr 28 12:15:00.463: INFO: Container coredns ready: true, restart count 0 Apr 28 12:15:00.463: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Apr 28 12:15:00.471: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 12:15:00.471: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 12:15:00.471: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Apr 28 12:15:00.471: INFO: Container coredns ready: true, restart count 0 Apr 28 12:15:00.471: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Apr 28 12:15:00.471: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Apr 28 12:15:00.585: INFO: Pod 
coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Apr 28 12:15:00.585: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Apr 28 12:15:00.585: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Apr 28 12:15:00.585: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Apr 28 12:15:00.585: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Apr 28 12:15:00.585: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-e2058c5e-8949-11ea-80e8-0242ac11000f.1609faccac7f032a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-r9892/filler-pod-e2058c5e-8949-11ea-80e8-0242ac11000f to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2058c5e-8949-11ea-80e8-0242ac11000f.1609faccf7bcfe94], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2058c5e-8949-11ea-80e8-0242ac11000f.1609facd3e800c26], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2058c5e-8949-11ea-80e8-0242ac11000f.1609facd567af0e4], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2068f68-8949-11ea-80e8-0242ac11000f.1609faccadb8c906], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-r9892/filler-pod-e2068f68-8949-11ea-80e8-0242ac11000f to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2068f68-8949-11ea-80e8-0242ac11000f.1609facd337f996f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2068f68-8949-11ea-80e8-0242ac11000f.1609facd7152b402], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2068f68-8949-11ea-80e8-0242ac11000f.1609facd80393a29], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1609facd9d292109], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:15:05.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-r9892" for this suite. 
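The FailedScheduling event above is the expected outcome once the filler pods have consumed most of each node's allocatable CPU. The same signal can be provoked directly with a pod whose CPU request no node can satisfy (the request below is illustrative and deliberately oversized):

# Sketch: an unsatisfiable CPU request leaves the pod Pending with "Insufficient cpu".
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: overcommitted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"   # deliberately larger than any node can allocate
EOF
kubectl describe pod overcommitted-pod   # Events: FailedScheduling ... Insufficient cpu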
Apr 28 12:15:11.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:15:11.826: INFO: namespace: e2e-tests-sched-pred-r9892, resource: bindings, ignored listing per whitelist Apr 28 12:15:11.883: INFO: namespace e2e-tests-sched-pred-r9892 deletion completed in 6.093159286s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:11.545 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:15:11.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 12:15:12.070: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-mc5d4" to be "success or failure" Apr 28 12:15:12.115: INFO: Pod "downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 45.643245ms Apr 28 12:15:14.119: INFO: Pod "downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049453326s Apr 28 12:15:16.123: INFO: Pod "downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053645886s STEP: Saw pod success Apr 28 12:15:16.123: INFO: Pod "downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:15:16.127: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 12:15:16.148: INFO: Waiting for pod downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f to disappear Apr 28 12:15:16.164: INFO: Pod downwardapi-volume-e8dcf472-8949-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:15:16.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mc5d4" for this suite. 
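The projected downwardAPI tests above read container resource fields back out through a volume. A sketch of the volume shape involved, with illustrative file paths and request sizes:

# Sketch: expose the container's memory request as a file via a projected downwardAPI volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF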
Apr 28 12:15:22.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:15:22.235: INFO: namespace: e2e-tests-projected-mc5d4, resource: bindings, ignored listing per whitelist Apr 28 12:15:22.288: INFO: namespace e2e-tests-projected-mc5d4 deletion completed in 6.12116233s • [SLOW TEST:10.405 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:15:22.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 12:15:22.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Apr 28 12:15:22.491: INFO: stderr: "" Apr 28 12:15:22.491: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:25:50Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Apr 28 12:15:22.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vwp56' Apr 28 12:15:22.773: INFO: stderr: "" Apr 28 12:15:22.773: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 28 12:15:22.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vwp56' Apr 28 12:15:23.081: INFO: stderr: "" Apr 28 12:15:23.081: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 28 12:15:24.086: INFO: Selector matched 1 pods for map[app:redis] Apr 28 12:15:24.086: INFO: Found 0 / 1 Apr 28 12:15:25.087: INFO: Selector matched 1 pods for map[app:redis] Apr 28 12:15:25.087: INFO: Found 0 / 1 Apr 28 12:15:26.086: INFO: Selector matched 1 pods for map[app:redis] Apr 28 12:15:26.086: INFO: Found 1 / 1 Apr 28 12:15:26.086: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 28 12:15:26.089: INFO: Selector matched 1 pods for map[app:redis] Apr 28 12:15:26.089: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 28 12:15:26.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-bt928 --namespace=e2e-tests-kubectl-vwp56' Apr 28 12:15:26.211: INFO: stderr: "" Apr 28 12:15:26.211: INFO: stdout: "Name: redis-master-bt928\nNamespace: e2e-tests-kubectl-vwp56\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Tue, 28 Apr 2020 12:15:22 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.112\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://b2501264b93de8339e39de24f8f73b1e34bcae9fafe415deeea521ceb2d684ed\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 28 Apr 2020 12:15:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wnpgt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-wnpgt:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-wnpgt\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-vwp56/redis-master-bt928 to hunter-worker\n Normal Pulled 2s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" Apr 28 12:15:26.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-vwp56' Apr 28 12:15:26.338: INFO: stderr: "" Apr 28 12:15:26.338: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-vwp56\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-bt928\n" Apr 28 12:15:26.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-vwp56' Apr 28 12:15:26.460: INFO: stderr: "" Apr 28 12:15:26.460: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-vwp56\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.103.4.73\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.112:6379\nSession Affinity: None\nEvents: \n" Apr 28 12:15:26.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Apr 28 12:15:26.602: INFO: stderr: "" Apr 28 12:15:26.602: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 28 Apr 2020 12:15:23 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 28 Apr 2020 12:15:23 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 28 Apr 2020 12:15:23 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 28 Apr 2020 12:15:23 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 43d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 28 12:15:26.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-vwp56' Apr 28 12:15:26.724: INFO: stderr: "" Apr 28 12:15:26.724: INFO: stdout: "Name: e2e-tests-kubectl-vwp56\nLabels: e2e-framework=kubectl\n e2e-run=a00d4fe0-893d-11ea-80e8-0242ac11000f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:15:26.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vwp56" for this suite. 
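For reference, the two manifests piped to 'kubectl create -f -' in this spec are roughly the following, reconstructed from the describe output above (the container port name is inferred from the TargetPort "redis-server"; the exact test fixtures may differ):

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server

The spec then runs 'kubectl describe' against the pod, the replication controller, the service, a node, and the namespace, checking that each output contains the expected fields.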
Apr 28 12:15:48.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:15:48.781: INFO: namespace: e2e-tests-kubectl-vwp56, resource: bindings, ignored listing per whitelist Apr 28 12:15:48.811: INFO: namespace e2e-tests-kubectl-vwp56 deletion completed in 22.082375191s • [SLOW TEST:26.522 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:15:48.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-fed5af05-8949-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 12:15:48.948: INFO: Waiting up to 5m0s for pod "pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-62b4h" to be "success or failure" Apr 28 12:15:48.960: INFO: Pod "pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.298112ms Apr 28 12:15:50.964: INFO: Pod "pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015895685s Apr 28 12:15:52.969: INFO: Pod "pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020305456s STEP: Saw pod success Apr 28 12:15:52.969: INFO: Pod "pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:15:52.972: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f container configmap-volume-test: STEP: delete the pod Apr 28 12:15:52.991: INFO: Waiting for pod pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f to disappear Apr 28 12:15:52.995: INFO: Pod pod-configmaps-fed65458-8949-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:15:52.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-62b4h" for this suite. 
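The "consumable in multiple volumes in the same pod" spec mounts a single ConfigMap through two separate volumes of one pod. A minimal sketch of the same shape (object names, data key, image and command are illustrative, not taken from the test source):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test          # container name as reported in the log above
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume

Both volumes reference the same ConfigMap; the pod succeeds once both mounts serve the same data.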
Apr 28 12:15:59.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:15:59.082: INFO: namespace: e2e-tests-configmap-62b4h, resource: bindings, ignored listing per whitelist Apr 28 12:15:59.121: INFO: namespace e2e-tests-configmap-62b4h deletion completed in 6.122620731s • [SLOW TEST:10.310 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:15:59.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Apr 28 12:16:03.274: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-04fd0804-894a-11ea-80e8-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-pods-9l54t", SelfLink:"/api/v1/namespaces/e2e-tests-pods-9l54t/pods/pod-submit-remove-04fd0804-894a-11ea-80e8-0242ac11000f", UID:"04fe0e6e-894a-11ea-99e8-0242ac110002", ResourceVersion:"7649876", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723672959, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"249826409"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zfx7j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015e0400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zfx7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ef89a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dedd40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ef8a00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ef8a20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001ef8a28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ef8a2c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723672959, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723672961, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723672961, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723672959, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.238", StartTime:(*v1.Time)(0xc001026020), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001026040), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://34c4ad48e8374b067b7b727a5888eb8b8b018cf85a142825ad332f9f351e0b1b"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 28 12:16:08.288: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:16:08.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-9l54t" for this suite. Apr 28 12:16:14.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:16:14.368: INFO: namespace: e2e-tests-pods-9l54t, resource: bindings, ignored listing per whitelist Apr 28 12:16:14.389: INFO: namespace e2e-tests-pods-9l54t deletion completed in 6.092493345s • [SLOW TEST:15.267 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:16:14.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 12:16:14.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-xv447" to be "success or failure" Apr 28 12:16:14.519: INFO: Pod "downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.466245ms Apr 28 12:16:16.612: INFO: Pod "downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105423227s Apr 28 12:16:18.617: INFO: Pod "downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11053899s STEP: Saw pod success Apr 28 12:16:18.617: INFO: Pod "downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:16:18.620: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 12:16:18.801: INFO: Waiting for pod downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:16:18.828: INFO: Pod downwardapi-volume-0e13c6b5-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:16:18.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xv447" for this suite. Apr 28 12:16:24.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:16:24.883: INFO: namespace: e2e-tests-projected-xv447, resource: bindings, ignored listing per whitelist Apr 28 12:16:24.915: INFO: namespace e2e-tests-projected-xv447 deletion completed in 6.083733887s • [SLOW TEST:10.526 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:16:24.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-14536a06-894a-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 12:16:25.018: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-pdv6g" to be "success or failure" Apr 28 12:16:25.021: INFO: Pod "pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108272ms Apr 28 12:16:27.026: INFO: Pod "pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007582402s Apr 28 12:16:29.031: INFO: Pod "pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012248177s STEP: Saw pod success Apr 28 12:16:29.031: INFO: Pod "pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:16:29.034: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Apr 28 12:16:29.053: INFO: Waiting for pod pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:16:29.058: INFO: Pod pod-projected-configmaps-14568b15-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:16:29.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pdv6g" for this suite. Apr 28 12:16:35.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:16:35.101: INFO: namespace: e2e-tests-projected-pdv6g, resource: bindings, ignored listing per whitelist Apr 28 12:16:35.161: INFO: namespace e2e-tests-projected-pdv6g deletion completed in 6.100564048s • [SLOW TEST:10.246 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:16:35.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 12:16:35.288: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:16:36.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-89xjv" for this suite. 
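The CustomResourceDefinition spec only verifies that a CRD can be registered and removed again. A minimal v1beta1 CRD of the kind this cluster version accepts looks like the following (group and kind are the stock example from the Kubernetes docs, not whatever names the test generates):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com      # must be <plural>.<group>
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab

Creating it with 'kubectl create -f' and deleting it again mirrors what the test does through the apiextensions client.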
Apr 28 12:16:42.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:16:42.381: INFO: namespace: e2e-tests-custom-resource-definition-89xjv, resource: bindings, ignored listing per whitelist Apr 28 12:16:42.447: INFO: namespace e2e-tests-custom-resource-definition-89xjv deletion completed in 6.108042426s • [SLOW TEST:7.285 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:16:42.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Apr 28 12:16:42.574: INFO: Waiting up to 5m0s for pod "downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-tn9qj" to be "success or failure" Apr 28 12:16:42.583: INFO: Pod "downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.114998ms Apr 28 12:16:44.587: INFO: Pod "downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013010252s Apr 28 12:16:46.591: INFO: Pod "downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016351703s STEP: Saw pod success Apr 28 12:16:46.591: INFO: Pod "downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:16:46.593: INFO: Trying to get logs from node hunter-worker pod downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 12:16:46.621: INFO: Waiting for pod downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:16:46.673: INFO: Pod downward-api-1ecc63c3-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:16:46.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tn9qj" for this suite. 
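The Downward API spec injects the pod's own UID into the container environment. A minimal sketch with an illustrative image and command (the real test uses its own busybox invocation and then checks the output):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container                 # container name as reported in the log above
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid

The kubelet resolves metadata.uid at pod start, so POD_UID appears in the container's environment without the workload having to call the API server.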
Apr 28 12:16:52.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:16:52.780: INFO: namespace: e2e-tests-downward-api-tn9qj, resource: bindings, ignored listing per whitelist Apr 28 12:16:52.783: INFO: namespace e2e-tests-downward-api-tn9qj deletion completed in 6.105331957s • [SLOW TEST:10.335 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:16:52.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 12:16:52.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 28 12:16:52.987: INFO: stderr: "" Apr 28 12:16:52.988: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:25:50Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:16:52.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q7sls" for this suite. 
Apr 28 12:16:59.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:16:59.090: INFO: namespace: e2e-tests-kubectl-q7sls, resource: bindings, ignored listing per whitelist Apr 28 12:16:59.111: INFO: namespace e2e-tests-kubectl-q7sls deletion completed in 6.118614635s • [SLOW TEST:6.328 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:16:59.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 12:16:59.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-projected-fnsq9" to be "success or failure" Apr 28 12:16:59.229: INFO: Pod "downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.773616ms Apr 28 12:17:01.232: INFO: Pod "downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016632606s Apr 28 12:17:03.237: INFO: Pod "downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021352262s STEP: Saw pod success Apr 28 12:17:03.237: INFO: Pod "downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:17:03.240: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 12:17:03.272: INFO: Waiting for pod downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:17:03.286: INFO: Pod downwardapi-volume-28b96354-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:17:03.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fnsq9" for this suite. 
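The "podname only" spec reads the pod's name from a projected downwardAPI volume rather than from env vars. A sketch of the same shape (paths and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-podname
spec:
  restartPolicy: Never
  containers:
  - name: client-container               # container name as reported in the log above
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name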
Apr 28 12:17:09.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:17:09.313: INFO: namespace: e2e-tests-projected-fnsq9, resource: bindings, ignored listing per whitelist Apr 28 12:17:09.378: INFO: namespace e2e-tests-projected-fnsq9 deletion completed in 6.088464064s • [SLOW TEST:10.267 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:17:09.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-2ed4e285-894a-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 12:17:09.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-gtgqc" to be "success or failure" Apr 28 12:17:09.500: INFO: Pod "pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.793592ms Apr 28 12:17:11.513: INFO: Pod "pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016306421s Apr 28 12:17:13.516: INFO: Pod "pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019820221s STEP: Saw pod success Apr 28 12:17:13.516: INFO: Pod "pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:17:13.519: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f container configmap-volume-test: STEP: delete the pod Apr 28 12:17:13.537: INFO: Waiting for pod pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:17:13.560: INFO: Pod pod-configmaps-2ed6ef51-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:17:13.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gtgqc" for this suite. 
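The "as non-root" variant is the same ConfigMap-volume consumption, just with the container forced to run as an unprivileged user. A sketch, assuming a ConfigMap like the one sketched earlier and an arbitrary non-root UID (the UID used by the real test may differ):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                      # illustrative non-root UID
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume        # assumes a ConfigMap of this name exists, as above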
Apr 28 12:17:19.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:17:19.617: INFO: namespace: e2e-tests-configmap-gtgqc, resource: bindings, ignored listing per whitelist Apr 28 12:17:19.688: INFO: namespace e2e-tests-configmap-gtgqc deletion completed in 6.106038576s • [SLOW TEST:10.310 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:17:19.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Apr 28 12:17:19.802: INFO: Waiting up to 5m0s for pod "var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-var-expansion-7dfxj" to be "success or failure" Apr 28 12:17:19.805: INFO: Pod "var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.294362ms Apr 28 12:17:21.896: INFO: Pod "var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093574806s Apr 28 12:17:23.900: INFO: Pod "var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097488665s STEP: Saw pod success Apr 28 12:17:23.900: INFO: Pod "var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:17:23.903: INFO: Trying to get logs from node hunter-worker pod var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 12:17:23.977: INFO: Waiting for pod var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:17:24.099: INFO: Pod var-expansion-34fcc852-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:17:24.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-7dfxj" for this suite. 
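Variable expansion here means one env var being built out of others with the $(VAR) syntax, which the kubelet resolves before the container starts. A minimal sketch (names and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-env
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"            # expands to foo-value;;bar-value

Only variables defined earlier in the same container's env list can be referenced; anything else is left as the literal $(...) string.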
Apr 28 12:17:30.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:17:30.191: INFO: namespace: e2e-tests-var-expansion-7dfxj, resource: bindings, ignored listing per whitelist Apr 28 12:17:30.199: INFO: namespace e2e-tests-var-expansion-7dfxj deletion completed in 6.094478306s • [SLOW TEST:10.510 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:17:30.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 28 12:17:34.864: INFO: Successfully updated pod "pod-update-3b469f11-894a-11ea-80e8-0242ac11000f" STEP: verifying the updated pod is in kubernetes Apr 28 12:17:34.870: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:17:34.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-grwcj" for this suite. 
Apr 28 12:17:56.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:17:56.939: INFO: namespace: e2e-tests-pods-grwcj, resource: bindings, ignored listing per whitelist Apr 28 12:17:56.955: INFO: namespace e2e-tests-pods-grwcj deletion completed in 22.08255335s • [SLOW TEST:26.756 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:17:56.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Apr 28 12:17:57.108: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4b36889f-894a-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00115db32), BlockOwnerDeletion:(*bool)(0xc00115db33)}} Apr 28 12:17:57.137: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4b3560f3-894a-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00115de02), BlockOwnerDeletion:(*bool)(0xc00115de03)}} Apr 28 12:17:57.144: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4b35f4fd-894a-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002b0e4e2), BlockOwnerDeletion:(*bool)(0xc002b0e4e3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:18:02.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-p64xw" for this suite. 
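The garbage-collector spec builds a cycle of three pods that own one another (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the OwnerReferences dumped above) and checks that deletion still proceeds. The ownerReferences stanza on, say, pod2 looks roughly like this in YAML; an owner's UID is only known after the owner exists, so in practice the references are filled in against already-created pods (the image is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod2
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod1
    uid: 4b3560f3-894a-11ea-99e8-0242ac110002   # pod1's UID as reported above
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1                 # illustrative image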
Apr 28 12:18:08.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:18:08.287: INFO: namespace: e2e-tests-gc-p64xw, resource: bindings, ignored listing per whitelist Apr 28 12:18:08.289: INFO: namespace e2e-tests-gc-p64xw deletion completed in 6.090258527s • [SLOW TEST:11.334 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:18:08.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Apr 28 12:18:08.377: INFO: Waiting up to 5m0s for pod "var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-var-expansion-8t4kz" to be "success or failure" Apr 28 12:18:08.394: INFO: Pod "var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.108476ms Apr 28 12:18:10.442: INFO: Pod "var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064575842s Apr 28 12:18:12.446: INFO: Pod "var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069040248s STEP: Saw pod success Apr 28 12:18:12.446: INFO: Pod "var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:18:12.450: INFO: Trying to get logs from node hunter-worker pod var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f container dapi-container: STEP: delete the pod Apr 28 12:18:12.761: INFO: Waiting for pod var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:18:12.795: INFO: Pod var-expansion-51f1248f-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:18:12.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-8t4kz" for this suite. 
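Here the $(VAR) expansion happens inside the container's args rather than in another env value. A minimal sketch (message text and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-args
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]            # kubelet substitutes $(MESSAGE) before exec
    env:
    - name: MESSAGE
      value: test message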
Apr 28 12:18:18.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:18:18.887: INFO: namespace: e2e-tests-var-expansion-8t4kz, resource: bindings, ignored listing per whitelist Apr 28 12:18:18.928: INFO: namespace e2e-tests-var-expansion-8t4kz deletion completed in 6.129501253s • [SLOW TEST:10.639 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:18:18.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 28 12:18:19.025: INFO: Waiting up to 5m0s for pod "pod-584af1bd-894a-11ea-80e8-0242ac11000f" in namespace "e2e-tests-emptydir-84m8l" to be "success or failure" Apr 28 12:18:19.029: INFO: Pod "pod-584af1bd-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.379262ms Apr 28 12:18:21.033: INFO: Pod "pod-584af1bd-894a-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007598171s Apr 28 12:18:23.037: INFO: Pod "pod-584af1bd-894a-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011449097s STEP: Saw pod success Apr 28 12:18:23.037: INFO: Pod "pod-584af1bd-894a-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:18:23.039: INFO: Trying to get logs from node hunter-worker2 pod pod-584af1bd-894a-11ea-80e8-0242ac11000f container test-container: STEP: delete the pod Apr 28 12:18:23.056: INFO: Waiting for pod pod-584af1bd-894a-11ea-80e8-0242ac11000f to disappear Apr 28 12:18:23.059: INFO: Pod pod-584af1bd-894a-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:18:23.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-84m8l" for this suite. 
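The EmptyDir matrix specs, here "(non-root,0666,default)", create a file on an emptyDir volume as a non-root user, with 0666 permissions, on the default (node-disk) medium, and verify the resulting mode. A rough equivalent using a stock busybox image (the real test uses the e2e mounttest image and flags):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-nonroot-0666
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                      # illustrative non-root UID
  containers:
  - name: test-container                 # container name as reported in the log above
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                         # default medium, i.e. backed by node disk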
Apr 28 12:18:29.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:18:29.168: INFO: namespace: e2e-tests-emptydir-84m8l, resource: bindings, ignored listing per whitelist Apr 28 12:18:29.175: INFO: namespace e2e-tests-emptydir-84m8l deletion completed in 6.113323019s • [SLOW TEST:10.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:18:29.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:18:33.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-v5g2n" for this suite. 
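Per the cleanup steps above, this spec creates a Secret and a ConfigMap and mounts both in one pod; each of those volume types is staged through an emptyDir "wrapper" internally, and the spec checks that the two wrappers do not collide. A loose sketch of that shape (object names, image and command are illustrative and the real fixture may differ):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-and-configmap
spec:
  restartPolicy: Never
  containers:
  - name: wrapper-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-volume-secret      # assumes this Secret exists
  - name: configmap-volume
    configMap:
      name: wrapped-volume-configmap         # assumes this ConfigMap exists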
Apr 28 12:18:39.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:18:39.463: INFO: namespace: e2e-tests-emptydir-wrapper-v5g2n, resource: bindings, ignored listing per whitelist Apr 28 12:18:39.528: INFO: namespace e2e-tests-emptydir-wrapper-v5g2n deletion completed in 6.08962986s • [SLOW TEST:10.352 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:18:39.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Apr 28 12:18:39.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-p6wtp run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 28 12:18:45.211: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0428 12:18:45.151564 3791 log.go:172] (0xc000154790) (0xc0009143c0) Create stream\nI0428 12:18:45.151602 3791 log.go:172] (0xc000154790) (0xc0009143c0) Stream added, broadcasting: 1\nI0428 12:18:45.154128 3791 log.go:172] (0xc000154790) Reply frame received for 1\nI0428 12:18:45.154163 3791 log.go:172] (0xc000154790) (0xc000914460) Create stream\nI0428 12:18:45.154171 3791 log.go:172] (0xc000154790) (0xc000914460) Stream added, broadcasting: 3\nI0428 12:18:45.155255 3791 log.go:172] (0xc000154790) Reply frame received for 3\nI0428 12:18:45.155314 3791 log.go:172] (0xc000154790) (0xc000772320) Create stream\nI0428 12:18:45.155338 3791 log.go:172] (0xc000154790) (0xc000772320) Stream added, broadcasting: 5\nI0428 12:18:45.156198 3791 log.go:172] (0xc000154790) Reply frame received for 5\nI0428 12:18:45.156237 3791 log.go:172] (0xc000154790) (0xc0007723c0) Create stream\nI0428 12:18:45.156250 3791 log.go:172] (0xc000154790) (0xc0007723c0) Stream added, broadcasting: 7\nI0428 12:18:45.157258 3791 log.go:172] (0xc000154790) Reply frame received for 7\nI0428 12:18:45.157407 3791 log.go:172] (0xc000914460) (3) Writing data frame\nI0428 12:18:45.157546 3791 log.go:172] (0xc000914460) (3) Writing data frame\nI0428 12:18:45.158216 3791 log.go:172] (0xc000154790) Data frame received for 5\nI0428 12:18:45.158228 3791 log.go:172] (0xc000772320) (5) Data frame handling\nI0428 12:18:45.158235 3791 log.go:172] (0xc000772320) (5) Data frame sent\nI0428 12:18:45.158988 3791 log.go:172] (0xc000154790) Data frame received for 5\nI0428 12:18:45.159004 3791 log.go:172] (0xc000772320) (5) Data frame handling\nI0428 12:18:45.159024 3791 log.go:172] (0xc000772320) (5) Data frame sent\nI0428 12:18:45.185002 3791 log.go:172] (0xc000154790) Data frame received for 7\nI0428 12:18:45.185045 3791 log.go:172] (0xc0007723c0) (7) Data frame handling\nI0428 12:18:45.185067 3791 log.go:172] (0xc000154790) Data frame received for 5\nI0428 12:18:45.185078 3791 log.go:172] (0xc000772320) (5) Data frame handling\nI0428 12:18:45.185639 3791 log.go:172] (0xc000154790) Data frame received for 1\nI0428 12:18:45.185698 3791 log.go:172] (0xc000154790) (0xc000914460) Stream removed, broadcasting: 3\nI0428 12:18:45.185748 3791 log.go:172] (0xc0009143c0) (1) Data frame handling\nI0428 12:18:45.185802 3791 log.go:172] (0xc0009143c0) (1) Data frame sent\nI0428 12:18:45.185835 3791 log.go:172] (0xc000154790) (0xc0009143c0) Stream removed, broadcasting: 1\nI0428 12:18:45.185858 3791 log.go:172] (0xc000154790) Go away received\nI0428 12:18:45.186013 3791 log.go:172] (0xc000154790) (0xc0009143c0) Stream removed, broadcasting: 1\nI0428 12:18:45.186035 3791 log.go:172] (0xc000154790) (0xc000914460) Stream removed, broadcasting: 3\nI0428 12:18:45.186046 3791 log.go:172] (0xc000154790) (0xc000772320) Stream removed, broadcasting: 5\nI0428 12:18:45.186058 3791 log.go:172] (0xc000154790) (0xc0007723c0) Stream removed, broadcasting: 7\n" Apr 28 12:18:45.211: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:18:47.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p6wtp" for this suite. 
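The '--generator=job/v1' run above is roughly equivalent to creating the following Job and attaching to it; the manifest cannot express '--rm' or '--attach', which kubectl handles client-side (the label and container name mirror what the generator would typically produce, but are assumptions here):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    metadata:
      labels:
        run: e2e-test-rm-busybox-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true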
Apr 28 12:18:53.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:18:53.243: INFO: namespace: e2e-tests-kubectl-p6wtp, resource: bindings, ignored listing per whitelist Apr 28 12:18:53.307: INFO: namespace e2e-tests-kubectl-p6wtp deletion completed in 6.085859048s • [SLOW TEST:13.779 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:18:53.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-ksl98 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-ksl98 STEP: Deleting pre-stop pod Apr 28 12:19:06.488: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:19:06.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-ksl98" for this suite. 
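The PreStop spec deletes a "tester" pod and then asks the long-running "server" pod whether the tester's preStop handler fired (the "prestop": 1 counter above). The tester's lifecycle stanza is along these lines; the server address, port and path here are placeholders, not the actual test endpoints:

apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  restartPolicy: Never
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -qO- http://SERVER_IP:8080/prestop || true"]   # SERVER_IP is a placeholder

Because the preStop handler runs before the container receives SIGTERM, the server sees the notification even though the tester is being deleted.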
Apr 28 12:19:44.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:19:44.608: INFO: namespace: e2e-tests-prestop-ksl98, resource: bindings, ignored listing per whitelist Apr 28 12:19:44.623: INFO: namespace e2e-tests-prestop-ksl98 deletion completed in 38.104852444s • [SLOW TEST:51.315 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:19:44.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 28 12:19:45.291: INFO: Pod name wrapped-volume-race-8bb3191b-894a-11ea-80e8-0242ac11000f: Found 0 pods out of 5 Apr 28 12:19:50.300: INFO: Pod name wrapped-volume-race-8bb3191b-894a-11ea-80e8-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8bb3191b-894a-11ea-80e8-0242ac11000f in namespace e2e-tests-emptydir-wrapper-m5t97, will wait for the garbage collector to delete the pods Apr 28 12:22:02.383: INFO: Deleting ReplicationController wrapped-volume-race-8bb3191b-894a-11ea-80e8-0242ac11000f took: 7.390124ms Apr 28 12:22:02.483: INFO: Terminating ReplicationController wrapped-volume-race-8bb3191b-894a-11ea-80e8-0242ac11000f pods took: 100.295852ms STEP: Creating RC which spawns configmap-volume pods Apr 28 12:22:42.449: INFO: Pod name wrapped-volume-race-f5466180-894a-11ea-80e8-0242ac11000f: Found 0 pods out of 5 Apr 28 12:22:47.456: INFO: Pod name wrapped-volume-race-f5466180-894a-11ea-80e8-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f5466180-894a-11ea-80e8-0242ac11000f in namespace e2e-tests-emptydir-wrapper-m5t97, will wait for the garbage collector to delete the pods Apr 28 12:24:51.571: INFO: Deleting ReplicationController wrapped-volume-race-f5466180-894a-11ea-80e8-0242ac11000f took: 7.740384ms Apr 28 12:24:51.672: INFO: Terminating ReplicationController wrapped-volume-race-f5466180-894a-11ea-80e8-0242ac11000f pods took: 100.298232ms STEP: Creating RC which spawns configmap-volume pods Apr 28 12:25:27.702: INFO: Pod name wrapped-volume-race-57cb75f4-894b-11ea-80e8-0242ac11000f: Found 0 pods out of 5 Apr 28 12:25:32.709: INFO: Pod name wrapped-volume-race-57cb75f4-894b-11ea-80e8-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-57cb75f4-894b-11ea-80e8-0242ac11000f in namespace 
e2e-tests-emptydir-wrapper-m5t97, will wait for the garbage collector to delete the pods Apr 28 12:28:18.793: INFO: Deleting ReplicationController wrapped-volume-race-57cb75f4-894b-11ea-80e8-0242ac11000f took: 7.259161ms Apr 28 12:28:18.894: INFO: Terminating ReplicationController wrapped-volume-race-57cb75f4-894b-11ea-80e8-0242ac11000f pods took: 100.270783ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:29:01.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-m5t97" for this suite. Apr 28 12:29:10.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:29:10.043: INFO: namespace: e2e-tests-emptydir-wrapper-m5t97, resource: bindings, ignored listing per whitelist Apr 28 12:29:10.081: INFO: namespace e2e-tests-emptydir-wrapper-m5t97 deletion completed in 8.087004499s • [SLOW TEST:565.458 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:29:10.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Apr 28 12:29:10.208: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-bdndc" to be "success or failure" Apr 28 12:29:10.230: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.579987ms Apr 28 12:29:12.234: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025779341s Apr 28 12:29:14.238: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030134051s STEP: Saw pod success Apr 28 12:29:14.238: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 28 12:29:14.241: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 28 12:29:14.262: INFO: Waiting for pod pod-host-path-test to disappear Apr 28 12:29:14.402: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:29:14.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-bdndc" for this suite. Apr 28 12:29:20.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:29:20.669: INFO: namespace: e2e-tests-hostpath-bdndc, resource: bindings, ignored listing per whitelist Apr 28 12:29:20.707: INFO: namespace e2e-tests-hostpath-bdndc deletion completed in 6.298949578s • [SLOW TEST:10.625 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:29:20.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0428 12:29:30.826845 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
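The garbage-collector spec above creates a replication controller, deletes it without orphaning, and then waits for the dependent pods to be collected. A minimal client-go sketch of that deletion, not the spec's own code: it uses the pre-context-argument signatures to match this v1.13 cluster, the namespace is the one from the log, and the RC name is a placeholder.

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation asks the garbage collector to remove the RC's
	// pods once the RC is gone ("not orphaning"); DeletePropagationOrphan
	// would leave the pods behind instead.
	policy := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers("e2e-tests-gc-tznjf").Delete(
		"example-rc", // placeholder name for the RC the spec creates
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}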
Apr 28 12:29:30.826: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:29:30.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tznjf" for this suite. Apr 28 12:29:36.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:29:36.912: INFO: namespace: e2e-tests-gc-tznjf, resource: bindings, ignored listing per whitelist Apr 28 12:29:36.934: INFO: namespace e2e-tests-gc-tznjf deletion completed in 6.104464044s • [SLOW TEST:16.227 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:29:36.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-ec6ce1bc-894b-11ea-80e8-0242ac11000f STEP: Creating a pod to test consume configMaps Apr 28 12:29:37.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f" in namespace "e2e-tests-configmap-nkdnv" to be "success or failure" Apr 28 12:29:37.121: INFO: Pod "pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.256865ms Apr 28 12:29:39.205: INFO: Pod "pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.13921786s Apr 28 12:29:41.209: INFO: Pod "pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142671443s STEP: Saw pod success Apr 28 12:29:41.209: INFO: Pod "pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:29:41.211: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f container configmap-volume-test: STEP: delete the pod Apr 28 12:29:41.248: INFO: Waiting for pod pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f to disappear Apr 28 12:29:41.261: INFO: Pod pod-configmaps-ec6fc9b1-894b-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:29:41.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nkdnv" for this suite. Apr 28 12:29:47.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:29:47.295: INFO: namespace: e2e-tests-configmap-nkdnv, resource: bindings, ignored listing per whitelist Apr 28 12:29:47.353: INFO: namespace e2e-tests-configmap-nkdnv deletion completed in 6.088799681s • [SLOW TEST:10.418 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:29:47.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Apr 28 12:29:47.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f" in namespace "e2e-tests-downward-api-5qv56" to be "success or failure" Apr 28 12:29:47.462: INFO: Pod "downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.10839ms Apr 28 12:29:49.469: INFO: Pod "downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031531325s Apr 28 12:29:51.511: INFO: Pod "downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073681207s STEP: Saw pod success Apr 28 12:29:51.511: INFO: Pod "downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f" satisfied condition "success or failure" Apr 28 12:29:51.762: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f container client-container: STEP: delete the pod Apr 28 12:29:51.847: INFO: Waiting for pod downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f to disappear Apr 28 12:29:51.911: INFO: Pod downwardapi-volume-f29d1cb0-894b-11ea-80e8-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:29:51.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5qv56" for this suite. Apr 28 12:29:57.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:29:57.946: INFO: namespace: e2e-tests-downward-api-5qv56, resource: bindings, ignored listing per whitelist Apr 28 12:29:58.003: INFO: namespace e2e-tests-downward-api-5qv56 deletion completed in 6.087946225s • [SLOW TEST:10.650 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Apr 28 12:29:58.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Apr 28 12:30:02.667: INFO: Successfully updated pod "annotationupdatef8fb6b6c-894b-11ea-80e8-0242ac11000f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Apr 28 12:30:04.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bntmp" for this suite. 
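The two downward API specs above expose pod metadata as files inside the container: the first reads the container's CPU request, the second mounts a projected downwardAPI volume and checks that the mounted file changes after the pod's annotations are updated (the "Successfully updated pod" line). For reference, a minimal sketch of such a volume using the corev1 types; the volume and file names are illustrative, and only the container name "client-container" is taken from the log.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// annotationsVolume projects metadata.annotations (and, for comparison, the
// CPU request exercised by the earlier spec) into files inside the pod.
// Downward API volumes are kept in sync by the kubelet, so updating the pod's
// annotations eventually rewrites the mounted file, which is what the
// annotation-update spec waits for.
var annotationsVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{
						{
							Path: "annotations",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.annotations",
							},
						},
						{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						},
					},
				},
			}},
		},
	},
}

func main() {}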
Apr 28 12:30:26.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:30:26.743: INFO: namespace: e2e-tests-projected-bntmp, resource: bindings, ignored listing per whitelist Apr 28 12:30:26.777: INFO: namespace e2e-tests-projected-bntmp deletion completed in 22.073971384s • [SLOW TEST:28.774 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSApr 28 12:30:26.777: INFO: Running AfterSuite actions on all nodes Apr 28 12:30:26.777: INFO: Running AfterSuite actions on node 1 Apr 28 12:30:26.777: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6190.159 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS