I0827 17:22:18.967354 6 e2e.go:224] Starting e2e run "dbdc887e-e889-11ea-b58c-0242ac11000b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598548938 - Will randomize all specs
Will run 201 of 2164 specs
Aug 27 17:22:19.134: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 17:22:19.136: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 27 17:22:19.149: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 27 17:22:20.362: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Aug 27 17:22:20.362: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 27 17:22:20.362: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 27 17:22:20.372: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 27 17:22:20.372: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 27 17:22:20.372: INFO: e2e test version: v1.13.12
Aug 27 17:22:20.373: INFO: kube-apiserver version: v1.13.12
SSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:22:20.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Aug 27 17:22:23.322: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pkdr4
Aug 27 17:22:38.464: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pkdr4
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 17:22:38.467: INFO: Initial restart count of pod liveness-http is 0
Aug 27 17:22:51.316: INFO: Restart count of pod e2e-tests-container-probe-pkdr4/liveness-http is now 1 (12.848652543s elapsed)
Aug 27 17:23:11.651: INFO: Restart count of pod e2e-tests-container-probe-pkdr4/liveness-http is now 2 (33.183581735s elapsed)
Aug 27 17:23:30.581: INFO: Restart count of pod e2e-tests-container-probe-pkdr4/liveness-http is now 3 (52.113554975s elapsed)
Aug 27 17:23:50.944: INFO: Restart count of pod e2e-tests-container-probe-pkdr4/liveness-http is now 4 (1m12.47665863s elapsed)
Aug 27 17:24:51.577: INFO: Restart count of pod e2e-tests-container-probe-pkdr4/liveness-http is now 5 (2m13.10986746s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:24:51.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pkdr4" for this suite.
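For context on the restart counts above: the kubelet drives them by running the pod's HTTP liveness probe and restarting the container after enough consecutive failures. A minimal standalone Go sketch of such a probe loop (not the e2e framework's code; the URL, period and failure threshold are assumed values) is:

```go
// Standalone sketch of an HTTP liveness-probe loop, roughly what the kubelet does
// to the liveness-http pod above. probeURL, period and failureThreshold are
// illustrative assumptions, not values taken from this run.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const (
		probeURL         = "http://10.244.2.198:8080/healthz" // assumed pod IP, port and path
		period           = 10 * time.Second
		failureThreshold = 3
	)
	failures := 0
	for i := 0; i < 30; i++ {
		resp, err := http.Get(probeURL)
		// An HTTP probe succeeds on any status in [200, 400).
		if err != nil || resp.StatusCode < 200 || resp.StatusCode >= 400 {
			failures++
		} else {
			failures = 0
		}
		if resp != nil {
			resp.Body.Close()
		}
		if failures >= failureThreshold {
			fmt.Println("liveness probe failed; kubelet would restart the container, bumping restartCount")
			failures = 0
		}
		time.Sleep(period)
	}
}
```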
Aug 27 17:24:58.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:24:58.221: INFO: namespace: e2e-tests-container-probe-pkdr4, resource: bindings, ignored listing per whitelist Aug 27 17:24:58.253: INFO: namespace e2e-tests-container-probe-pkdr4 deletion completed in 6.365500795s • [SLOW TEST:157.880 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:24:58.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Aug 27 17:25:08.463: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-3b20f8a0-e88a-11ea-b58c-0242ac11000b", GenerateName:"", Namespace:"e2e-tests-pods-ttk8f", SelfLink:"/api/v1/namespaces/e2e-tests-pods-ttk8f/pods/pod-submit-remove-3b20f8a0-e88a-11ea-b58c-0242ac11000b", UID:"3b244933-e88a-11ea-a485-0242ac120004", ResourceVersion:"2684818", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734145898, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"336894663"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-czmkv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00110ddc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-czmkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010dc918), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001cb73e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010dc960)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010dc980)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0010dc988), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0010dc98c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734145898, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734145906, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734145906, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734145898, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.18.0.8", PodIP:"10.244.2.198", StartTime:(*v1.Time)(0xc0010d7140), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0010d7160), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://e682e2b596eafeeec84428e56ad2559ce4b4b72777b36642154921bae449b858"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:25:18.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ttk8f" for this suite. Aug 27 17:25:24.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:25:24.430: INFO: namespace: e2e-tests-pods-ttk8f, resource: bindings, ignored listing per whitelist Aug 27 17:25:24.442: INFO: namespace e2e-tests-pods-ttk8f deletion completed in 6.086111955s • [SLOW TEST:26.189 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:25:24.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wnzdw A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wnzdw;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wnzdw A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wnzdw;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wnzdw.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +tcp +noall +answer +search 
dns-test-service.e2e-tests-dns-wnzdw.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wnzdw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 53.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.53_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wnzdw A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wnzdw;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wnzdw A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wnzdw.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wnzdw.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wnzdw.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wnzdw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 53.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.53_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 27 17:25:31.283: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.325: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.328: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.331: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.334: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.337: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.340: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.342: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.345: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:31.362: INFO: Lookups using e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wnzdw jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw 
jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc] Aug 27 17:25:36.367: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.415: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.418: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.420: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.465: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.468: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.471: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.498: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.537: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:36.554: INFO: Lookups using e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wnzdw jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc] Aug 27 17:25:41.366: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.406: INFO: Unable to read jessie_udp@dns-test-service from pod 
e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.409: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.412: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.415: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.418: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.422: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.425: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:41.446: INFO: Lookups using e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wnzdw jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc] Aug 27 17:25:46.734: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.277: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.279: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.281: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the 
server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.283: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.286: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.288: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.291: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.293: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:47.311: INFO: Lookups using e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wnzdw jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc] Aug 27 17:25:51.778: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.811: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.813: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.815: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.816: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.818: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get 
pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.820: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.821: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.823: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:25:51.836: INFO: Lookups using e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wnzdw jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw jessie_udp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@dns-test-service.e2e-tests-dns-wnzdw.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc] Aug 27 17:25:57.790: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b: the server could not find the requested resource (get pods dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b) Aug 27 17:26:02.443: INFO: Lookups using e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b failed for: [wheezy_udp@dns-test-service] Aug 27 17:26:06.585: INFO: DNS probes using e2e-tests-dns-wnzdw/dns-test-4b0ae3b5-e88a-11ea-b58c-0242ac11000b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:26:12.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-wnzdw" for this suite. 
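The dig loops above boil down to three kinds of lookups against the cluster DNS: A records for the service name at various search suffixes, SRV records for the named port, and a PTR record for the service ClusterIP. A standalone Go sketch of the same lookups (using the names from this run; it would only resolve from inside that cluster) is:

```go
// Standalone sketch of the lookups the dig loops above perform, using Go's resolver
// instead of dig. Names are taken from the log; resolution requires the cluster DNS.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	r := net.DefaultResolver

	// A record, like `dig dns-test-service.e2e-tests-dns-wnzdw.svc A`.
	addrs, err := r.LookupHost(ctx, "dns-test-service.e2e-tests-dns-wnzdw.svc.cluster.local")
	fmt.Println("A:", addrs, err)

	// SRV record, like `dig _http._tcp.dns-test-service.e2e-tests-dns-wnzdw.svc SRV`.
	_, srvs, err := r.LookupSRV(ctx, "http", "tcp", "dns-test-service.e2e-tests-dns-wnzdw.svc.cluster.local")
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}
	fmt.Println("SRV err:", err)

	// Reverse (PTR) lookup of the ClusterIP, like `dig 53.113.109.10.in-addr.arpa. PTR`.
	names, err := r.LookupAddr(ctx, "10.109.113.53")
	fmt.Println("PTR:", names, err)
}
```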
Aug 27 17:26:21.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 17:26:22.243: INFO: namespace: e2e-tests-dns-wnzdw, resource: bindings, ignored listing per whitelist
Aug 27 17:26:22.548: INFO: namespace e2e-tests-dns-wnzdw deletion completed in 10.134693408s
• [SLOW TEST:58.106 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:26:22.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:27:13.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-fmm4b" for this suite.
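The 'terminate-cmd-rpa/rpof/rpn' containers above differ only in restart policy (Always, OnFailure, Never), and the expected RestartCount, Phase and State follow from how that policy reacts to the container's exit code. A small Go sketch of that expectation table (a paraphrase of the documented behaviour, not the test's own code) is:

```go
// Standalone sketch of how RestartPolicy and exit code determine the terminal pod
// phase checked by the spec above. This restates documented Kubernetes behaviour.
package main

import "fmt"

func expectedPhase(restartPolicy string, exitCode int) string {
	switch restartPolicy {
	case "Always": // rpa: container is always restarted, pod stays Running
		return "Running"
	case "OnFailure": // rpof: restarted only when the exit code is non-zero
		if exitCode == 0 {
			return "Succeeded"
		}
		return "Running"
	case "Never": // rpn: never restarted
		if exitCode == 0 {
			return "Succeeded"
		}
		return "Failed"
	}
	return "Unknown"
}

func main() {
	for _, rp := range []string{"Always", "OnFailure", "Never"} {
		for _, code := range []int{0, 1} {
			fmt.Printf("restartPolicy=%-9s exitCode=%d -> phase=%s\n", rp, code, expectedPhase(rp, code))
		}
	}
}
```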
Aug 27 17:27:19.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 17:27:19.457: INFO: namespace: e2e-tests-container-runtime-fmm4b, resource: bindings, ignored listing per whitelist
Aug 27 17:27:19.459: INFO: namespace e2e-tests-container-runtime-fmm4b deletion completed in 6.08521236s
• [SLOW TEST:56.911 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:27:19.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-8f523346-e88a-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 17:27:19.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-slvnx" to be "success or failure"
Aug 27 17:27:19.629: INFO: Pod "pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.411229ms
Aug 27 17:27:21.633: INFO: Pod "pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027249373s
Aug 27 17:27:23.636: INFO: Pod "pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030608902s
STEP: Saw pod success
Aug 27 17:27:23.636: INFO: Pod "pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 17:27:23.638: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b container configmap-volume-test:
STEP: delete the pod
Aug 27 17:27:23.666: INFO: Waiting for pod pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b to disappear
Aug 27 17:27:23.693: INFO: Pod pod-configmaps-8f52bf83-e88a-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:27:23.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-slvnx" for this suite.
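What the configmap-volume-test container above effectively checks is that the same ConfigMap key is readable, with identical content, at two different mount points inside one pod. A standalone Go sketch of that check (the mount paths and key name are assumed for illustration) is:

```go
// Standalone sketch of consuming one ConfigMap key from two volume mounts in the
// same pod. The paths and key name are assumptions, not taken from this run.
package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/etc/configmap-volume-1/data-1", // first mount of the ConfigMap (assumed path)
		"/etc/configmap-volume-2/data-1", // second mount of the same ConfigMap
	}
	var contents []string
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		contents = append(contents, string(b))
	}
	if contents[0] == contents[1] {
		fmt.Println("both mounts expose the same ConfigMap data:", contents[0])
	} else {
		fmt.Println("mismatch between the two mounts")
	}
}
```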
Aug 27 17:27:29.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 17:27:29.813: INFO: namespace: e2e-tests-configmap-slvnx, resource: bindings, ignored listing per whitelist
Aug 27 17:27:29.872: INFO: namespace e2e-tests-configmap-slvnx deletion completed in 6.176632153s
• [SLOW TEST:10.413 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:27:29.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-958a91ee-e88a-11ea-b58c-0242ac11000b
STEP: Creating secret with name s-test-opt-upd-958a925c-e88a-11ea-b58c-0242ac11000b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-958a91ee-e88a-11ea-b58c-0242ac11000b
STEP: Updating secret s-test-opt-upd-958a925c-e88a-11ea-b58c-0242ac11000b
STEP: Creating secret with name s-test-opt-create-958a9287-e88a-11ea-b58c-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:27:38.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mfdxb" for this suite.
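The 'waiting to observe update in volume' step above is a poll: the kubelet periodically re-projects Secret data into the volume, so the test keeps re-reading the mounted files until the new values show up. A standalone Go sketch of such a poll (the mount path and expected value are assumptions) is:

```go
// Standalone sketch of polling a file projected from a Secret until the kubelet
// syncs an updated value. mountedKey and wantValue are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const (
		mountedKey = "/etc/projected-secret-volume/data-1" // assumed projection path
		wantValue  = "value-2"                             // value written by the update
		timeout    = 2 * time.Minute
	)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		b, err := os.ReadFile(mountedKey)
		if err == nil && string(b) == wantValue {
			fmt.Println("update observed in volume")
			return
		}
		time.Sleep(2 * time.Second) // projected volumes are refreshed periodically, not instantly
	}
	fmt.Println("timed out waiting for the updated secret to appear")
}
```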
Aug 27 17:28:04.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 17:28:04.462: INFO: namespace: e2e-tests-projected-mfdxb, resource: bindings, ignored listing per whitelist
Aug 27 17:28:04.466: INFO: namespace e2e-tests-projected-mfdxb deletion completed in 26.097165079s
• [SLOW TEST:34.594 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:28:04.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 27 17:28:06.569: INFO: Waiting up to 5m0s for pod "pod-ab22b611-e88a-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-cbng8" to be "success or failure"
Aug 27 17:28:06.613: INFO: Pod "pod-ab22b611-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.191333ms
Aug 27 17:28:08.742: INFO: Pod "pod-ab22b611-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173449354s
Aug 27 17:28:11.211: INFO: Pod "pod-ab22b611-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642141933s
Aug 27 17:28:13.215: INFO: Pod "pod-ab22b611-e88a-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 6.646530967s
Aug 27 17:28:15.239: INFO: Pod "pod-ab22b611-e88a-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.670165585s
STEP: Saw pod success
Aug 27 17:28:15.239: INFO: Pod "pod-ab22b611-e88a-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 17:28:15.241: INFO: Trying to get logs from node hunter-worker pod pod-ab22b611-e88a-11ea-b58c-0242ac11000b container test-container:
STEP: delete the pod
Aug 27 17:28:15.264: INFO: Waiting for pod pod-ab22b611-e88a-11ea-b58c-0242ac11000b to disappear
Aug 27 17:28:15.313: INFO: Pod pod-ab22b611-e88a-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:28:15.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cbng8" for this suite.
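The '(root,0666,tmpfs)' case above boils down to writing a file into a memory-backed emptyDir and verifying its permission bits are 0666 (the non-root variant that follows differs only in the user the container runs as). A standalone Go sketch of that check (the mount path is an assumption, and the explicit chmod guards against the process umask) is:

```go
// Standalone sketch of the emptydir 0666 check: create a file in the tmpfs-backed
// emptyDir mount and verify its permission bits. The path is an assumption.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/mnt/volume1/test-file" // assumed mount point of the emptyDir (medium: Memory)
	if err := os.WriteFile(path, []byte("mount-tester new file\n"), 0o666); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	// The process umask usually masks the group/other write bits from WriteFile,
	// so set the mode explicitly before checking.
	if err := os.Chmod(path, 0o666); err != nil {
		fmt.Println("chmod failed:", err)
		return
	}
	info, err := os.Stat(path)
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Printf("perms=%04o (expected 0666)\n", info.Mode().Perm())
}
```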
Aug 27 17:28:21.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 17:28:21.437: INFO: namespace: e2e-tests-emptydir-cbng8, resource: bindings, ignored listing per whitelist
Aug 27 17:28:21.493: INFO: namespace e2e-tests-emptydir-cbng8 deletion completed in 6.175523371s
• [SLOW TEST:17.026 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:28:21.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 27 17:28:21.828: INFO: Waiting up to 5m0s for pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-dlz2v" to be "success or failure"
Aug 27 17:28:21.869: INFO: Pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.556382ms
Aug 27 17:28:23.873: INFO: Pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044598984s
Aug 27 17:28:26.092: INFO: Pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263374847s
Aug 27 17:28:28.096: INFO: Pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267328119s
Aug 27 17:28:30.150: INFO: Pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32159513s
Aug 27 17:28:32.276: INFO: Pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.447713234s
STEP: Saw pod success
Aug 27 17:28:32.276: INFO: Pod "pod-b4612459-e88a-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 17:28:32.857: INFO: Trying to get logs from node hunter-worker pod pod-b4612459-e88a-11ea-b58c-0242ac11000b container test-container:
STEP: delete the pod
Aug 27 17:28:33.409: INFO: Waiting for pod pod-b4612459-e88a-11ea-b58c-0242ac11000b to disappear
Aug 27 17:28:33.476: INFO: Pod pod-b4612459-e88a-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:28:33.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dlz2v" for this suite.
Aug 27 17:28:41.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 17:28:41.782: INFO: namespace: e2e-tests-emptydir-dlz2v, resource: bindings, ignored listing per whitelist
Aug 27 17:28:41.807: INFO: namespace e2e-tests-emptydir-dlz2v deletion completed in 8.326345909s
• [SLOW TEST:20.314 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:28:41.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-c0936627-e88a-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 17:28:42.461: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-wjbpw" to be "success or failure"
Aug 27 17:28:42.465: INFO: Pod "pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.413538ms
Aug 27 17:28:44.468: INFO: Pod "pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006679027s
Aug 27 17:28:46.471: INFO: Pod "pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009349854s
Aug 27 17:28:48.474: INFO: Pod "pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012767442s
Aug 27 17:28:50.737: INFO: Pod "pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.275398032s
STEP: Saw pod success
Aug 27 17:28:50.737: INFO: Pod "pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 17:28:50.828: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b container projected-configmap-volume-test:
STEP: delete the pod
Aug 27 17:28:51.189: INFO: Waiting for pod pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b to disappear
Aug 27 17:28:51.383: INFO: Pod pod-projected-configmaps-c0999552-e88a-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:28:51.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wjbpw" for this suite.
Aug 27 17:29:01.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 17:29:01.851: INFO: namespace: e2e-tests-projected-wjbpw, resource: bindings, ignored listing per whitelist
Aug 27 17:29:01.890: INFO: namespace e2e-tests-projected-wjbpw deletion completed in 10.502463366s
• [SLOW TEST:20.083 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 17:29:01.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 27 17:29:16.272: INFO: Successfully updated pod "annotationupdatecce88e88-e88a-11ea-b58c-0242ac11000b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 17:29:18.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2ck8v" for this suite.
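The annotationupdate pod above mounts its own annotations through a downward API volume; when the test patches the annotations, the kubelet rewrites the mounted file, which is what the pod's container watches for. A standalone Go sketch of that watch loop (the mount path is an assumption) is:

```go
// Standalone sketch of watching a downward API volume file for annotation changes.
// The mount path is an assumed, conventional location, not taken from this run.
package main

import (
	"bytes"
	"fmt"
	"os"
	"time"
)

func main() {
	const annotationsFile = "/etc/podinfo/annotations" // downward API volume (assumed path)
	var last []byte
	for i := 0; i < 60; i++ {
		cur, err := os.ReadFile(annotationsFile)
		if err == nil && !bytes.Equal(cur, last) {
			fmt.Printf("annotations changed:\n%s\n", cur)
			last = cur
		}
		time.Sleep(2 * time.Second) // the kubelet rewrites the file shortly after the patch
	}
}
```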
Aug 27 17:29:48.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:29:48.770: INFO: namespace: e2e-tests-downward-api-2ck8v, resource: bindings, ignored listing per whitelist Aug 27 17:29:48.800: INFO: namespace e2e-tests-downward-api-2ck8v deletion completed in 30.21918251s • [SLOW TEST:46.910 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:29:48.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 27 17:29:49.553: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 17:29:49.561: INFO: Waiting for terminating namespaces to be deleted... Aug 27 17:29:49.563: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 27 17:29:49.568: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded) Aug 27 17:29:49.568: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 17:29:49.568: INFO: rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:15:14 +0000 UTC (1 container statuses recorded) Aug 27 17:29:49.568: INFO: Container rally-a0035e6c-0q7zegi3 ready: true, restart count 92 Aug 27 17:29:49.568: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:29:49.568: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 17:29:49.568: INFO: rally-512f71fd-snnx9euy from c-rally-512f71fd-e8oba5qh started at 2020-08-27 17:29:24 +0000 UTC (1 container statuses recorded) Aug 27 17:29:49.568: INFO: Container rally-512f71fd-snnx9euy ready: true, restart count 0 Aug 27 17:29:49.568: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 27 17:29:49.646: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:29:49.646: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 17:29:49.646: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:29:49.646: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 17:29:49.646: INFO: rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:14:52 +0000 UTC (1 container statuses recorded) Aug 27 17:29:49.646: INFO: Container rally-a0035e6c-x0kfgasz ready: true, restart count 92 [It] validates that NodeSelector is respected if not matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162f3033512b8c2b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:29:50.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-tndnc" for this suite. Aug 27 17:29:56.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:29:56.786: INFO: namespace: e2e-tests-sched-pred-tndnc, resource: bindings, ignored listing per whitelist Aug 27 17:29:56.844: INFO: namespace e2e-tests-sched-pred-tndnc deletion completed in 6.133161129s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:8.043 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:29:56.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Aug 27 17:29:57.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Aug 27 17:30:00.060: INFO: stderr: "" Aug 27 17:30:00.060: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:30:00.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v2k9t" for this suite. 
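The FailedScheduling event in the scheduler-predicates test above comes from a pod whose nodeSelector matches no node label. A minimal sketch, using a deliberately unmatched, hypothetical label and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    example.com/no-such-label: "true"   # matches no node, so the pod stays Pending
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# The scheduler should emit a FailedScheduling event much like the one logged above.
kubectl get events --field-selector reason=FailedScheduling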
Aug 27 17:30:06.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:30:06.111: INFO: namespace: e2e-tests-kubectl-v2k9t, resource: bindings, ignored listing per whitelist Aug 27 17:30:06.171: INFO: namespace e2e-tests-kubectl-v2k9t deletion completed in 6.107906479s • [SLOW TEST:9.327 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:30:06.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-z8wb8.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z8wb8.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-z8wb8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-z8wb8.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z8wb8.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-z8wb8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 27 17:30:14.754: INFO: DNS probes using e2e-tests-dns-z8wb8/dns-test-f2bae966-e88a-11ea-b58c-0242ac11000b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:30:14.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-z8wb8" for this suite. 
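The dig loops above boil down to resolving kubernetes.default (over UDP and TCP) plus the probe pod's own A record. Assuming working cluster DNS and the busybox 1.28 image, whose nslookup is well behaved, a one-off spot check of the same lookup can look like this (pod name and image tag are illustrative):

kubectl run -it --rm --restart=Never dns-check --image=busybox:1.28 -- \
  nslookup kubernetes.default.svc.cluster.local
# A successful lookup returns the cluster IP of the kubernetes service, which is the condition
# the wheezy/jessie scripts above record by writing their "OK" marker files.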
Aug 27 17:30:20.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:30:20.815: INFO: namespace: e2e-tests-dns-z8wb8, resource: bindings, ignored listing per whitelist Aug 27 17:30:20.878: INFO: namespace e2e-tests-dns-z8wb8 deletion completed in 6.083837513s • [SLOW TEST:14.706 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:30:20.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 27 17:30:21.011: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:30:30.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-n2n4j" for this suite. 
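Init containers on a RestartAlways pod run to completion, in order, before the regular containers start, which is what the test above waits for. A minimal sketch of that shape, with illustrative names and images:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-wait
    image: busybox
    command: ["sh", "-c", "echo init done"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl get pod init-demo   # STATUS shows Init:0/1 until the init container exits, then Running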
Aug 27 17:30:52.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:30:52.376: INFO: namespace: e2e-tests-init-container-n2n4j, resource: bindings, ignored listing per whitelist Aug 27 17:30:52.381: INFO: namespace e2e-tests-init-container-n2n4j deletion completed in 22.083740791s • [SLOW TEST:31.503 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:30:52.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0827 17:31:02.517698 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 27 17:31:02.517: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:31:02.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-fchx5" for this suite. 
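The garbage-collector case deletes a replication controller without orphaning, then waits for the dependent pods to be collected. Sketched with kubectl and hypothetical names; a plain delete (no --cascade=false) removes the dependents as well:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl delete rc gc-demo            # non-orphaning delete: pods are garbage collected too
kubectl get pods -l app=gc-demo      # should eventually report no resources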
Aug 27 17:31:10.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:31:10.650: INFO: namespace: e2e-tests-gc-fchx5, resource: bindings, ignored listing per whitelist Aug 27 17:31:10.654: INFO: namespace e2e-tests-gc-fchx5 deletion completed in 8.133302305s • [SLOW TEST:18.273 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:31:10.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 27 17:31:10.776: INFO: Waiting up to 5m0s for pod "pod-191a6cc0-e88b-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-jgwxb" to be "success or failure" Aug 27 17:31:10.816: INFO: Pod "pod-191a6cc0-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.161735ms Aug 27 17:31:13.151: INFO: Pod "pod-191a6cc0-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374109141s Aug 27 17:31:15.155: INFO: Pod "pod-191a6cc0-e88b-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.378563909s Aug 27 17:31:17.159: INFO: Pod "pod-191a6cc0-e88b-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.382632579s STEP: Saw pod success Aug 27 17:31:17.159: INFO: Pod "pod-191a6cc0-e88b-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:31:17.162: INFO: Trying to get logs from node hunter-worker2 pod pod-191a6cc0-e88b-11ea-b58c-0242ac11000b container test-container: STEP: delete the pod Aug 27 17:31:17.224: INFO: Waiting for pod pod-191a6cc0-e88b-11ea-b58c-0242ac11000b to disappear Aug 27 17:31:17.337: INFO: Pod pod-191a6cc0-e88b-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:31:17.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jgwxb" for this suite. 
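The (root,0666,default) case writes a file into an emptyDir on the default medium and checks its mode. A rough equivalent, with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /cache/file && chmod 0666 /cache/file && ls -l /cache/file"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}        # default medium (node disk); medium: Memory would use tmpfs instead
EOF
kubectl logs emptydir-demo   # expect something like: -rw-rw-rw- ... /cache/file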
Aug 27 17:31:23.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:31:23.445: INFO: namespace: e2e-tests-emptydir-jgwxb, resource: bindings, ignored listing per whitelist Aug 27 17:31:23.529: INFO: namespace e2e-tests-emptydir-jgwxb deletion completed in 6.188017396s • [SLOW TEST:12.874 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:31:23.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-20cc0279-e88b-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume configMaps Aug 27 17:31:23.671: INFO: Waiting up to 5m0s for pod "pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-l2jxg" to be "success or failure" Aug 27 17:31:23.703: INFO: Pod "pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.648386ms Aug 27 17:31:25.750: INFO: Pod "pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079475964s Aug 27 17:31:27.755: INFO: Pod "pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083845736s STEP: Saw pod success Aug 27 17:31:27.755: INFO: Pod "pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:31:27.758: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b container configmap-volume-test: STEP: delete the pod Aug 27 17:31:27.778: INFO: Waiting for pod pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b to disappear Aug 27 17:31:27.783: INFO: Pod pod-configmaps-20cd28a2-e88b-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:31:27.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l2jxg" for this suite. 
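The "mappings and Item mode set" case projects a single ConfigMap key to a custom path with an explicit per-item file mode. A minimal sketch with hypothetical names:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm/path/to && cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400        # per-item mode, the "Item mode set" part of the test name
EOF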
Aug 27 17:31:33.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:31:33.875: INFO: namespace: e2e-tests-configmap-l2jxg, resource: bindings, ignored listing per whitelist Aug 27 17:31:33.888: INFO: namespace e2e-tests-configmap-l2jxg deletion completed in 6.102661807s • [SLOW TEST:10.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:31:33.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Aug 27 17:31:34.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:34.354: INFO: stderr: "" Aug 27 17:31:34.354: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 27 17:31:34.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:34.553: INFO: stderr: "" Aug 27 17:31:34.553: INFO: stdout: "update-demo-nautilus-9vqdv update-demo-nautilus-svtk9 " Aug 27 17:31:34.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9vqdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:34.652: INFO: stderr: "" Aug 27 17:31:34.652: INFO: stdout: "" Aug 27 17:31:34.652: INFO: update-demo-nautilus-9vqdv is created but not running Aug 27 17:31:39.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:39.766: INFO: stderr: "" Aug 27 17:31:39.766: INFO: stdout: "update-demo-nautilus-9vqdv update-demo-nautilus-svtk9 " Aug 27 17:31:39.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9vqdv -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:39.856: INFO: stderr: "" Aug 27 17:31:39.856: INFO: stdout: "" Aug 27 17:31:39.856: INFO: update-demo-nautilus-9vqdv is created but not running Aug 27 17:31:44.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:44.956: INFO: stderr: "" Aug 27 17:31:44.956: INFO: stdout: "update-demo-nautilus-9vqdv update-demo-nautilus-svtk9 " Aug 27 17:31:44.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9vqdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:45.063: INFO: stderr: "" Aug 27 17:31:45.063: INFO: stdout: "true" Aug 27 17:31:45.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9vqdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:45.157: INFO: stderr: "" Aug 27 17:31:45.157: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 27 17:31:45.157: INFO: validating pod update-demo-nautilus-9vqdv Aug 27 17:31:45.161: INFO: got data: { "image": "nautilus.jpg" } Aug 27 17:31:45.161: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 27 17:31:45.161: INFO: update-demo-nautilus-9vqdv is verified up and running Aug 27 17:31:45.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svtk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:45.263: INFO: stderr: "" Aug 27 17:31:45.263: INFO: stdout: "true" Aug 27 17:31:45.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svtk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:31:45.373: INFO: stderr: "" Aug 27 17:31:45.374: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 27 17:31:45.374: INFO: validating pod update-demo-nautilus-svtk9 Aug 27 17:31:45.377: INFO: got data: { "image": "nautilus.jpg" } Aug 27 17:31:45.377: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 27 17:31:45.377: INFO: update-demo-nautilus-svtk9 is verified up and running STEP: rolling-update to new replication controller Aug 27 17:31:45.378: INFO: scanned /root for discovery docs: Aug 27 17:31:45.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:32:21.342: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 27 17:32:21.342: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 27 17:32:21.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:32:21.469: INFO: stderr: "" Aug 27 17:32:21.469: INFO: stdout: "update-demo-kitten-ftshz update-demo-kitten-hplwr " Aug 27 17:32:21.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ftshz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:32:21.745: INFO: stderr: "" Aug 27 17:32:21.745: INFO: stdout: "true" Aug 27 17:32:21.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ftshz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:32:21.955: INFO: stderr: "" Aug 27 17:32:21.955: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 27 17:32:21.955: INFO: validating pod update-demo-kitten-ftshz Aug 27 17:32:22.067: INFO: got data: { "image": "kitten.jpg" } Aug 27 17:32:22.067: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 27 17:32:22.067: INFO: update-demo-kitten-ftshz is verified up and running Aug 27 17:32:22.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hplwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:32:22.177: INFO: stderr: "" Aug 27 17:32:22.177: INFO: stdout: "true" Aug 27 17:32:22.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hplwr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t9qm2' Aug 27 17:32:22.270: INFO: stderr: "" Aug 27 17:32:22.270: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 27 17:32:22.270: INFO: validating pod update-demo-kitten-hplwr Aug 27 17:32:22.290: INFO: got data: { "image": "kitten.jpg" } Aug 27 17:32:22.290: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 27 17:32:22.290: INFO: update-demo-kitten-hplwr is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:32:22.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t9qm2" for this suite. Aug 27 17:32:46.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:32:46.412: INFO: namespace: e2e-tests-kubectl-t9qm2, resource: bindings, ignored listing per whitelist Aug 27 17:32:46.431: INFO: namespace e2e-tests-kubectl-t9qm2 deletion completed in 24.136677419s • [SLOW TEST:72.542 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:32:46.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-52a1c61b-e88b-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume secrets Aug 27 17:32:47.606: INFO: Waiting up to 5m0s for pod "pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-tbpv9" to be "success or failure" Aug 27 17:32:48.415: INFO: Pod "pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 809.684401ms Aug 27 17:32:50.673: INFO: Pod "pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.067472215s Aug 27 17:32:52.677: INFO: Pod "pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.07086923s Aug 27 17:32:54.681: INFO: Pod "pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.075088s STEP: Saw pod success Aug 27 17:32:54.681: INFO: Pod "pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:32:54.684: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b container secret-volume-test: STEP: delete the pod Aug 27 17:32:55.243: INFO: Waiting for pod pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b to disappear Aug 27 17:32:55.397: INFO: Pod pod-secrets-52a920cb-e88b-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:32:55.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tbpv9" for this suite. Aug 27 17:33:05.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:33:05.533: INFO: namespace: e2e-tests-secrets-tbpv9, resource: bindings, ignored listing per whitelist Aug 27 17:33:05.793: INFO: namespace e2e-tests-secrets-tbpv9 deletion completed in 10.39122092s • [SLOW TEST:19.362 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:33:05.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-5e0f3203-e88b-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume configMaps Aug 27 17:33:06.921: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-9d5fv" to be "success or failure" Aug 27 17:33:07.094: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 172.964088ms Aug 27 17:33:09.787: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.865606432s Aug 27 17:33:12.272: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.351102145s Aug 27 17:33:14.276: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.355045204s Aug 27 17:33:16.638: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.716916502s Aug 27 17:33:19.051: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.129936601s Aug 27 17:33:21.140: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.218840123s STEP: Saw pod success Aug 27 17:33:21.140: INFO: Pod "pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:33:21.142: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b container projected-configmap-volume-test: STEP: delete the pod Aug 27 17:33:21.998: INFO: Waiting for pod pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b to disappear Aug 27 17:33:22.517: INFO: Pod pod-projected-configmaps-5e10d689-e88b-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:33:22.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9d5fv" for this suite. Aug 27 17:33:38.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:33:38.994: INFO: namespace: e2e-tests-projected-9d5fv, resource: bindings, ignored listing per whitelist Aug 27 17:33:39.005: INFO: namespace e2e-tests-projected-9d5fv deletion completed in 16.48308191s • [SLOW TEST:33.211 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:33:39.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:33:42.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-r9vj4" for this suite. 
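The QOS-class case only checks that status.qosClass is set once the pod is admitted. For instance, equal requests and limits yield the Guaranteed class (names and values here are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 64Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # requests == limits => Guaranteed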
Aug 27 17:34:05.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:34:05.420: INFO: namespace: e2e-tests-pods-r9vj4, resource: bindings, ignored listing per whitelist Aug 27 17:34:05.445: INFO: namespace e2e-tests-pods-r9vj4 deletion completed in 23.041768324s • [SLOW TEST:26.440 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:34:05.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fmdxl Aug 27 17:34:09.626: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fmdxl STEP: checking the pod's current state and verifying that restartCount is present Aug 27 17:34:09.629: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:38:10.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fmdxl" for this suite. 
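This probe case is the inverse of the earlier liveness-http test: as long as the HTTP endpoint keeps answering with 2xx, the restart count must stay at its initial value for the whole observation window. A sketch with a plain nginx pod standing in for the test's own /healthz server (image and timings are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /          # an endpoint that keeps returning 200
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 3
EOF
kubectl get pod liveness-http-demo   # RESTARTS should stay at 0 while the endpoint stays healthy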
Aug 27 17:38:17.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:38:17.680: INFO: namespace: e2e-tests-container-probe-fmdxl, resource: bindings, ignored listing per whitelist Aug 27 17:38:17.701: INFO: namespace e2e-tests-container-probe-fmdxl deletion completed in 6.557172409s • [SLOW TEST:252.256 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:38:17.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 27 17:38:17.877: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4nm2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nm2h/configmaps/e2e-watch-test-watch-closed,UID:17a6f5ed-e88c-11ea-a485-0242ac120004,ResourceVersion:2687778,Generation:0,CreationTimestamp:2020-08-27 17:38:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 27 17:38:17.877: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4nm2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nm2h/configmaps/e2e-watch-test-watch-closed,UID:17a6f5ed-e88c-11ea-a485-0242ac120004,ResourceVersion:2687779,Generation:0,CreationTimestamp:2020-08-27 17:38:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 27 17:38:18.055: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4nm2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nm2h/configmaps/e2e-watch-test-watch-closed,UID:17a6f5ed-e88c-11ea-a485-0242ac120004,ResourceVersion:2687780,Generation:0,CreationTimestamp:2020-08-27 17:38:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 27 17:38:18.055: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4nm2h,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nm2h/configmaps/e2e-watch-test-watch-closed,UID:17a6f5ed-e88c-11ea-a485-0242ac120004,ResourceVersion:2687781,Generation:0,CreationTimestamp:2020-08-27 17:38:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:38:18.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-4nm2h" for this suite. Aug 27 17:38:24.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:38:24.491: INFO: namespace: e2e-tests-watch-4nm2h, resource: bindings, ignored listing per whitelist Aug 27 17:38:24.492: INFO: namespace e2e-tests-watch-4nm2h deletion completed in 6.396979973s • [SLOW TEST:6.790 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:38:24.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 27 17:38:24.628: INFO: Waiting up to 5m0s for pod "downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-2hkl2" to be "success or failure" Aug 27 17:38:24.766: INFO: Pod "downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 137.693261ms Aug 27 17:38:26.770: INFO: Pod "downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141573075s Aug 27 17:38:28.772: INFO: Pod "downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.144403253s Aug 27 17:38:30.999: INFO: Pod "downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.37136193s STEP: Saw pod success Aug 27 17:38:30.999: INFO: Pod "downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:38:31.002: INFO: Trying to get logs from node hunter-worker2 pod downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b container dapi-container: STEP: delete the pod Aug 27 17:38:31.094: INFO: Waiting for pod downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b to disappear Aug 27 17:38:31.322: INFO: Pod downward-api-1bb665b5-e88c-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:38:31.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2hkl2" for this suite. Aug 27 17:38:37.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:38:37.541: INFO: namespace: e2e-tests-downward-api-2hkl2, resource: bindings, ignored listing per whitelist Aug 27 17:38:37.603: INFO: namespace e2e-tests-downward-api-2hkl2 deletion completed in 6.27788363s • [SLOW TEST:13.110 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:38:37.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 27 17:38:37.815: INFO: Waiting up to 5m0s for pod "downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-fz9xh" to be "success or failure" Aug 27 17:38:37.866: INFO: Pod "downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 50.25026ms Aug 27 17:38:39.870: INFO: Pod "downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054807565s Aug 27 17:38:41.874: INFO: Pod "downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058659304s STEP: Saw pod success Aug 27 17:38:41.874: INFO: Pod "downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:38:41.877: INFO: Trying to get logs from node hunter-worker pod downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b container dapi-container: STEP: delete the pod Aug 27 17:38:42.032: INFO: Waiting for pod downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b to disappear Aug 27 17:38:42.066: INFO: Pod downward-api-238e2b74-e88c-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:38:42.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fz9xh" for this suite. Aug 27 17:38:48.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:38:48.242: INFO: namespace: e2e-tests-downward-api-fz9xh, resource: bindings, ignored listing per whitelist Aug 27 17:38:48.250: INFO: namespace e2e-tests-downward-api-fz9xh deletion completed in 6.180199131s • [SLOW TEST:10.647 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:38:48.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 27 17:38:56.636: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-29eab516-e88c-11ea-b58c-0242ac11000b,GenerateName:,Namespace:e2e-tests-events-g5vml,SelfLink:/api/v1/namespaces/e2e-tests-events-g5vml/pods/send-events-29eab516-e88c-11ea-b58c-0242ac11000b,UID:29ff29de-e88c-11ea-a485-0242ac120004,ResourceVersion:2687994,Generation:0,CreationTimestamp:2020-08-27 17:38:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 453466019,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wlq26 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wlq26,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wlq26 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File 
IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001feb600} {node.kubernetes.io/unreachable Exists NoExecute 0xc001feb620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:38:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:38:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:38:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:38:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.104,StartTime:2020-08-27 17:38:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-27 17:38:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://ba8fe669a1916ecfe7ae040b0aba6173bf3ea9a5b2d599e003e4bceb8e2a8781}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Aug 27 17:38:58.641: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 27 17:39:00.779: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:39:00.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-g5vml" for this suite. 
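The two events the test waits for (one from the scheduler, one from the kubelet) can be listed directly with a field selector on the involved object; the namespace and pod name below are hypothetical:

kubectl get events --namespace default \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-demo
# Expect a Scheduled event from default-scheduler plus Pulled/Created/Started events from the kubelet.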
Aug 27 17:39:43.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:39:43.153: INFO: namespace: e2e-tests-events-g5vml, resource: bindings, ignored listing per whitelist Aug 27 17:39:43.184: INFO: namespace e2e-tests-events-g5vml deletion completed in 42.326095359s • [SLOW TEST:54.934 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:39:43.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-t8d2f Aug 27 17:39:52.750: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-t8d2f STEP: checking the pod's current state and verifying that restartCount is present Aug 27 17:39:52.753: INFO: Initial restart count of pod liveness-exec is 0 Aug 27 17:40:43.650: INFO: Restart count of pod e2e-tests-container-probe-t8d2f/liveness-exec is now 1 (50.896703249s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:40:43.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-t8d2f" for this suite. 
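The behaviour exercised above can be reproduced with a minimal pod whose exec probe runs cat /tmp/health against a file the container deletes after a few seconds; this is an illustrative sketch, not the exact pod the suite creates:
# The container creates /tmp/health, removes it after 10s, and the failing
# "cat /tmp/health" probe then makes the kubelet restart it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# After a minute or so the restart count starts climbing:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'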
Aug 27 17:40:52.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:40:52.241: INFO: namespace: e2e-tests-container-probe-t8d2f, resource: bindings, ignored listing per whitelist Aug 27 17:40:52.262: INFO: namespace e2e-tests-container-probe-t8d2f deletion completed in 8.222186674s • [SLOW TEST:69.078 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:40:52.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:40:52.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-ndzmf" to be "success or failure" Aug 27 17:40:52.585: INFO: Pod "downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.189068ms Aug 27 17:40:54.839: INFO: Pod "downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257129365s Aug 27 17:40:56.924: INFO: Pod "downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342412134s Aug 27 17:40:58.927: INFO: Pod "downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345450314s STEP: Saw pod success Aug 27 17:40:58.927: INFO: Pod "downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:40:58.929: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 17:40:59.058: INFO: Waiting for pod downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b to disappear Aug 27 17:40:59.076: INFO: Pod downwardapi-volume-73e41515-e88c-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:40:59.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ndzmf" for this suite. 
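A downward API volume item with resourceFieldRef limits.cpu falls back to the node's allocatable CPU when the container declares no CPU limit, which is what the spec above checks; a sketch with illustrative names:
# No CPU limit is set, so /etc/podinfo/cpu_limit reports node allocatable CPU.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit; sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF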
Aug 27 17:41:05.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:41:05.145: INFO: namespace: e2e-tests-downward-api-ndzmf, resource: bindings, ignored listing per whitelist Aug 27 17:41:05.233: INFO: namespace e2e-tests-downward-api-ndzmf deletion completed in 6.153748967s • [SLOW TEST:12.971 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:41:05.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Aug 27 17:41:05.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zq8q2' Aug 27 17:41:08.035: INFO: stderr: "" Aug 27 17:41:08.035: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Aug 27 17:41:09.040: INFO: Selector matched 1 pods for map[app:redis] Aug 27 17:41:09.040: INFO: Found 0 / 1 Aug 27 17:41:10.045: INFO: Selector matched 1 pods for map[app:redis] Aug 27 17:41:10.045: INFO: Found 0 / 1 Aug 27 17:41:11.096: INFO: Selector matched 1 pods for map[app:redis] Aug 27 17:41:11.096: INFO: Found 0 / 1 Aug 27 17:41:12.040: INFO: Selector matched 1 pods for map[app:redis] Aug 27 17:41:12.040: INFO: Found 0 / 1 Aug 27 17:41:13.052: INFO: Selector matched 1 pods for map[app:redis] Aug 27 17:41:13.052: INFO: Found 0 / 1 Aug 27 17:41:14.040: INFO: Selector matched 1 pods for map[app:redis] Aug 27 17:41:14.040: INFO: Found 1 / 1 Aug 27 17:41:14.041: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 27 17:41:14.045: INFO: Selector matched 1 pods for map[app:redis] Aug 27 17:41:14.045: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Aug 27 17:41:14.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bx8tk redis-master --namespace=e2e-tests-kubectl-zq8q2' Aug 27 17:41:14.164: INFO: stderr: "" Aug 27 17:41:14.164: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Aug 17:41:13.011 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Aug 17:41:13.011 # Server started, Redis version 3.2.12\n1:M 27 Aug 17:41:13.011 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Aug 17:41:13.011 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Aug 27 17:41:14.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bx8tk redis-master --namespace=e2e-tests-kubectl-zq8q2 --tail=1' Aug 27 17:41:14.271: INFO: stderr: "" Aug 27 17:41:14.271: INFO: stdout: "1:M 27 Aug 17:41:13.011 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Aug 27 17:41:14.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bx8tk redis-master --namespace=e2e-tests-kubectl-zq8q2 --limit-bytes=1' Aug 27 17:41:14.596: INFO: stderr: "" Aug 27 17:41:14.596: INFO: stdout: " " STEP: exposing timestamps Aug 27 17:41:14.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bx8tk redis-master --namespace=e2e-tests-kubectl-zq8q2 --tail=1 --timestamps' Aug 27 17:41:14.709: INFO: stderr: "" Aug 27 17:41:14.709: INFO: stdout: "2020-08-27T17:41:13.011951248Z 1:M 27 Aug 17:41:13.011 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Aug 27 17:41:17.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bx8tk redis-master --namespace=e2e-tests-kubectl-zq8q2 --since=1s' Aug 27 17:41:17.312: INFO: stderr: "" Aug 27 17:41:17.312: INFO: stdout: "" Aug 27 17:41:17.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bx8tk redis-master --namespace=e2e-tests-kubectl-zq8q2 --since=24h' Aug 27 17:41:17.428: INFO: stderr: "" Aug 27 17:41:17.428: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Aug 17:41:13.011 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Aug 17:41:13.011 # Server started, Redis version 3.2.12\n1:M 27 Aug 17:41:13.011 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Aug 17:41:13.011 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Aug 27 17:41:17.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zq8q2' Aug 27 17:41:17.555: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 27 17:41:17.555: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Aug 27 17:41:17.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-zq8q2' Aug 27 17:41:17.666: INFO: stderr: "No resources found.\n" Aug 27 17:41:17.666: INFO: stdout: "" Aug 27 17:41:17.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-zq8q2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 27 17:41:18.005: INFO: stderr: "" Aug 27 17:41:18.005: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:41:18.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zq8q2" for this suite. 
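The filters exercised above map onto ordinary kubectl logs flags (the run invokes the older "kubectl log" spelling; kubectl logs accepts the same options). Pod and container names are placeholders:
# Retrieve and filter container logs.
kubectl logs redis-master-bx8tk -c redis-master                      # full output
kubectl logs redis-master-bx8tk -c redis-master --tail=1             # last line only
kubectl logs redis-master-bx8tk -c redis-master --limit-bytes=1      # first byte only
kubectl logs redis-master-bx8tk -c redis-master --tail=1 --timestamps
kubectl logs redis-master-bx8tk -c redis-master --since=1s           # recent entries only
kubectl logs redis-master-bx8tk -c redis-master --since=24h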
Aug 27 17:41:24.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:41:24.136: INFO: namespace: e2e-tests-kubectl-zq8q2, resource: bindings, ignored listing per whitelist Aug 27 17:41:24.139: INFO: namespace e2e-tests-kubectl-zq8q2 deletion completed in 6.130777893s • [SLOW TEST:18.906 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:41:24.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 27 17:41:24.242: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 17:41:24.261: INFO: Waiting for terminating namespaces to be deleted... Aug 27 17:41:24.264: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 27 17:41:24.269: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:41:24.269: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 17:41:24.269: INFO: rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:15:14 +0000 UTC (1 container statuses recorded) Aug 27 17:41:24.269: INFO: Container rally-a0035e6c-0q7zegi3 ready: true, restart count 92 Aug 27 17:41:24.269: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded) Aug 27 17:41:24.269: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 17:41:24.269: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 27 17:41:24.274: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:41:24.274: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 17:41:24.274: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:41:24.274: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 17:41:24.274: INFO: rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:14:52 +0000 UTC (1 container statuses recorded) Aug 27 17:41:24.274: INFO: Container rally-a0035e6c-x0kfgasz ready: true, restart count 92 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8a7cc2cb-e88c-11ea-b58c-0242ac11000b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-8a7cc2cb-e88c-11ea-b58c-0242ac11000b off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-8a7cc2cb-e88c-11ea-b58c-0242ac11000b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:41:35.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-fnh8d" for this suite. Aug 27 17:41:45.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:41:45.570: INFO: namespace: e2e-tests-sched-pred-fnh8d, resource: bindings, ignored listing per whitelist Aug 27 17:41:45.624: INFO: namespace e2e-tests-sched-pred-fnh8d deletion completed in 10.111847176s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:21.484 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:41:45.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-w87f STEP: Creating a pod to test atomic-volume-subpath Aug 27 17:41:45.853: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w87f" in namespace "e2e-tests-subpath-x6hdk" to be "success or failure" Aug 27 17:41:45.862: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.731187ms Aug 27 17:41:47.867: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014008039s Aug 27 17:41:49.871: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018461019s Aug 27 17:41:51.950: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097526774s Aug 27 17:41:53.955: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.101999522s Aug 27 17:41:55.958: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 10.105744701s Aug 27 17:41:58.016: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 12.163462979s Aug 27 17:42:00.020: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 14.16697177s Aug 27 17:42:02.023: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 16.170827011s Aug 27 17:42:04.073: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 18.219922026s Aug 27 17:42:06.077: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 20.2238923s Aug 27 17:42:08.370: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 22.51731664s Aug 27 17:42:10.374: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Running", Reason="", readiness=false. Elapsed: 24.520988398s Aug 27 17:42:12.377: INFO: Pod "pod-subpath-test-configmap-w87f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.524747671s STEP: Saw pod success Aug 27 17:42:12.377: INFO: Pod "pod-subpath-test-configmap-w87f" satisfied condition "success or failure" Aug 27 17:42:12.380: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-w87f container test-container-subpath-configmap-w87f: STEP: delete the pod Aug 27 17:42:12.418: INFO: Waiting for pod pod-subpath-test-configmap-w87f to disappear Aug 27 17:42:12.431: INFO: Pod pod-subpath-test-configmap-w87f no longer exists STEP: Deleting pod pod-subpath-test-configmap-w87f Aug 27 17:42:12.431: INFO: Deleting pod "pod-subpath-test-configmap-w87f" in namespace "e2e-tests-subpath-x6hdk" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:42:12.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-x6hdk" for this suite. 
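The subPath mechanism behind that spec projects a single ConfigMap key as one file at the mount path instead of shadowing the whole directory; the conformance case additionally targets a path where a file already exists, but the mechanics are the same. A sketch with illustrative names:
# One ConfigMap key mounted as a single file via subPath.
kubectl create configmap subpath-demo --from-literal=config.txt='hello from configmap'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/config.txt; sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/config.txt
      subPath: config.txt
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF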
Aug 27 17:42:18.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:42:18.691: INFO: namespace: e2e-tests-subpath-x6hdk, resource: bindings, ignored listing per whitelist Aug 27 17:42:18.755: INFO: namespace e2e-tests-subpath-x6hdk deletion completed in 6.31109523s • [SLOW TEST:33.131 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:42:18.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a758d3d7-e88c-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume configMaps Aug 27 17:42:18.913: INFO: Waiting up to 5m0s for pod "pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-2dhwd" to be "success or failure" Aug 27 17:42:18.936: INFO: Pod "pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.789659ms Aug 27 17:42:20.940: INFO: Pod "pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026484266s Aug 27 17:42:22.943: INFO: Pod "pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03020734s STEP: Saw pod success Aug 27 17:42:22.943: INFO: Pod "pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:42:22.946: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b container configmap-volume-test: STEP: delete the pod Aug 27 17:42:22.999: INFO: Waiting for pod pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b to disappear Aug 27 17:42:23.019: INFO: Pod pod-configmaps-a75b46f6-e88c-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:42:23.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2dhwd" for this suite. 
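defaultMode on the configMap volume sets the permission bits of every projected key, which is the knob this spec exercises; a sketch, names illustrative:
# Keys land as files with mode 0400 (read-only for the owner).
kubectl create configmap mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume; sleep 3600"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: mode-demo
      defaultMode: 0400
EOF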
Aug 27 17:42:29.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:42:29.109: INFO: namespace: e2e-tests-configmap-2dhwd, resource: bindings, ignored listing per whitelist Aug 27 17:42:29.131: INFO: namespace e2e-tests-configmap-2dhwd deletion completed in 6.106993925s • [SLOW TEST:10.375 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:42:29.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:42:29.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-2ng6c" to be "success or failure" Aug 27 17:42:29.268: INFO: Pod "downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.880758ms Aug 27 17:42:31.270: INFO: Pod "downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04542258s Aug 27 17:42:33.301: INFO: Pod "downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076035875s Aug 27 17:42:35.304: INFO: Pod "downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079437192s STEP: Saw pod success Aug 27 17:42:35.304: INFO: Pod "downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:42:35.307: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 17:42:35.380: INFO: Waiting for pod downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b to disappear Aug 27 17:42:35.420: INFO: Pod downwardapi-volume-ad8110aa-e88c-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:42:35.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2ng6c" for this suite. 
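Per-item mode inside a projected downwardAPI source is the field being tested above; a sketch with illustrative names:
# The podname file is created with mode 0400 rather than the volume default.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo; sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400
EOF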
Aug 27 17:42:41.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:42:41.600: INFO: namespace: e2e-tests-projected-2ng6c, resource: bindings, ignored listing per whitelist Aug 27 17:42:41.611: INFO: namespace e2e-tests-projected-2ng6c deletion completed in 6.187507719s • [SLOW TEST:12.480 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:42:41.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:42:41.722: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-klfss" to be "success or failure" Aug 27 17:42:41.739: INFO: Pod "downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.453543ms Aug 27 17:42:43.877: INFO: Pod "downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15429349s Aug 27 17:42:46.161: INFO: Pod "downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438065219s Aug 27 17:42:48.165: INFO: Pod "downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442433111s Aug 27 17:42:50.168: INFO: Pod "downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.445345436s STEP: Saw pod success Aug 27 17:42:50.168: INFO: Pod "downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:42:50.170: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 17:42:50.243: INFO: Waiting for pod downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b to disappear Aug 27 17:42:50.288: INFO: Pod downwardapi-volume-b4f13953-e88c-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:42:50.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-klfss" for this suite. 
Aug 27 17:42:58.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:42:58.387: INFO: namespace: e2e-tests-downward-api-klfss, resource: bindings, ignored listing per whitelist Aug 27 17:42:58.416: INFO: namespace e2e-tests-downward-api-klfss deletion completed in 8.125390853s • [SLOW TEST:16.805 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:42:58.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-befd6bd1-e88c-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume secrets Aug 27 17:42:58.705: INFO: Waiting up to 5m0s for pod "pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-bcbq6" to be "success or failure" Aug 27 17:42:58.709: INFO: Pod "pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296392ms Aug 27 17:43:00.820: INFO: Pod "pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114344843s Aug 27 17:43:02.855: INFO: Pod "pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149782078s Aug 27 17:43:04.859: INFO: Pod "pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153642998s STEP: Saw pod success Aug 27 17:43:04.859: INFO: Pod "pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:43:04.861: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b container secret-volume-test: STEP: delete the pod Aug 27 17:43:05.010: INFO: Waiting for pod pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b to disappear Aug 27 17:43:05.020: INFO: Pod pod-secrets-bf10fb25-e88c-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:43:05.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bcbq6" for this suite. 
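Secret volumes resolve strictly within the pod's own namespace, which is why the run above creates and tears down a second secret namespace; a sketch of the same idea, all names illustrative:
# Two namespaces hold a secret with the same name; the pod in demo-a
# only ever sees the demo-a copy.
kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl create secret generic shared-name -n demo-a --from-literal=data-1=from-a
kubectl create secret generic shared-name -n demo-b --from-literal=data-1=from-b
kubectl apply -n demo-a -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1; sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF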
Aug 27 17:43:11.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:43:11.084: INFO: namespace: e2e-tests-secrets-bcbq6, resource: bindings, ignored listing per whitelist Aug 27 17:43:11.107: INFO: namespace e2e-tests-secrets-bcbq6 deletion completed in 6.084363978s STEP: Destroying namespace "e2e-tests-secret-namespace-qd9t9" for this suite. Aug 27 17:43:17.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:43:17.170: INFO: namespace: e2e-tests-secret-namespace-qd9t9, resource: bindings, ignored listing per whitelist Aug 27 17:43:17.196: INFO: namespace e2e-tests-secret-namespace-qd9t9 deletion completed in 6.088103367s • [SLOW TEST:18.779 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:43:17.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:43:17.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-wk6vz" for this suite. 
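What that spec checks is essentially that the built-in kubernetes Service in the default namespace fronts the API server over HTTPS; the same thing can be inspected directly:
# The cluster's "kubernetes" service should expose the API server on 443/TCP.
kubectl get service kubernetes --namespace default -o wide
kubectl get service kubernetes --namespace default -o jsonpath='{.spec.ports[0].port}'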
Aug 27 17:43:23.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:43:23.489: INFO: namespace: e2e-tests-services-wk6vz, resource: bindings, ignored listing per whitelist Aug 27 17:43:23.515: INFO: namespace e2e-tests-services-wk6vz deletion completed in 6.220617462s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.319 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:43:23.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:43:32.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-82bdg" for this suite. 
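The read-only busybox case corresponds to setting readOnlyRootFilesystem in the container security context; a sketch with illustrative names:
# Writes outside mounted volumes fail with a read-only file system error.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file || echo rootfs is read-only; sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF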
Aug 27 17:44:24.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:44:25.168: INFO: namespace: e2e-tests-kubelet-test-82bdg, resource: bindings, ignored listing per whitelist Aug 27 17:44:25.188: INFO: namespace e2e-tests-kubelet-test-82bdg deletion completed in 52.645531561s • [SLOW TEST:61.673 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:44:25.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Aug 27 17:44:25.307: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix837713549/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:44:25.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-szv29" for this suite. 
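The proxy started above can be reproduced by hand and queried through the socket (the socket path is a placeholder):
# Serve the API over a unix socket and fetch /api/ through it.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1   # give the proxy a moment to create the socket
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/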
Aug 27 17:44:31.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:44:31.611: INFO: namespace: e2e-tests-kubectl-szv29, resource: bindings, ignored listing per whitelist Aug 27 17:44:31.617: INFO: namespace e2e-tests-kubectl-szv29 deletion completed in 6.161142409s • [SLOW TEST:6.429 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:44:31.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:45:32.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-pshcm" for this suite. 
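Unlike the liveness cases earlier in the run, a failing readiness probe only keeps the container out of the Ready state; it never triggers a restart. A sketch:
# READY stays 0/1 and RESTARTS stays 0 for the lifetime of the pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
      periodSeconds: 5
EOF
kubectl get pod never-ready-demo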
Aug 27 17:45:54.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:45:54.205: INFO: namespace: e2e-tests-container-probe-pshcm, resource: bindings, ignored listing per whitelist Aug 27 17:45:54.221: INFO: namespace e2e-tests-container-probe-pshcm deletion completed in 22.216031858s • [SLOW TEST:82.603 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:45:54.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 17:46:04.818: INFO: Waiting up to 5m0s for pod "client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-pods-fwvts" to be "success or failure" Aug 27 17:46:04.830: INFO: Pod "client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.650082ms Aug 27 17:46:06.836: INFO: Pod "client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017560671s Aug 27 17:46:08.877: INFO: Pod "client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058756183s STEP: Saw pod success Aug 27 17:46:08.877: INFO: Pod "client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:46:08.883: INFO: Trying to get logs from node hunter-worker pod client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b container env3cont: STEP: delete the pod Aug 27 17:46:09.068: INFO: Waiting for pod client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:46:09.260: INFO: Pod client-envvars-2e011934-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:46:09.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-fwvts" for this suite. 
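The environment variables that spec looks for follow the <SERVICE_NAME>_SERVICE_HOST / _SERVICE_PORT convention and are injected into containers created after the service exists; they can be inspected in any running pod (pod name is a placeholder):
# List the service discovery variables visible inside a running container.
kubectl exec my-pod -- env | grep -E '_SERVICE_(HOST|PORT)='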
Aug 27 17:47:01.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:47:01.313: INFO: namespace: e2e-tests-pods-fwvts, resource: bindings, ignored listing per whitelist Aug 27 17:47:01.357: INFO: namespace e2e-tests-pods-fwvts deletion completed in 52.09417147s • [SLOW TEST:67.136 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:47:01.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6dpm9 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 27 17:47:01.598: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 27 17:47:24.243: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.111 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6dpm9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 17:47:24.243: INFO: >>> kubeConfig: /root/.kube/config I0827 17:47:24.275050 6 log.go:172] (0xc001cd20b0) (0xc00141a320) Create stream I0827 17:47:24.275090 6 log.go:172] (0xc001cd20b0) (0xc00141a320) Stream added, broadcasting: 1 I0827 17:47:24.276522 6 log.go:172] (0xc001cd20b0) Reply frame received for 1 I0827 17:47:24.276555 6 log.go:172] (0xc001cd20b0) (0xc000f86000) Create stream I0827 17:47:24.276566 6 log.go:172] (0xc001cd20b0) (0xc000f86000) Stream added, broadcasting: 3 I0827 17:47:24.277160 6 log.go:172] (0xc001cd20b0) Reply frame received for 3 I0827 17:47:24.277198 6 log.go:172] (0xc001cd20b0) (0xc0020ca280) Create stream I0827 17:47:24.277212 6 log.go:172] (0xc001cd20b0) (0xc0020ca280) Stream added, broadcasting: 5 I0827 17:47:24.277835 6 log.go:172] (0xc001cd20b0) Reply frame received for 5 I0827 17:47:25.337687 6 log.go:172] (0xc001cd20b0) Data frame received for 3 I0827 17:47:25.337717 6 log.go:172] (0xc000f86000) (3) Data frame handling I0827 17:47:25.337735 6 log.go:172] (0xc000f86000) (3) Data frame sent I0827 17:47:25.337750 6 log.go:172] (0xc001cd20b0) Data frame received for 3 I0827 17:47:25.337759 6 log.go:172] (0xc000f86000) (3) Data frame handling I0827 17:47:25.337812 6 log.go:172] (0xc001cd20b0) Data frame received for 5 I0827 17:47:25.337833 6 log.go:172] (0xc0020ca280) (5) Data frame handling I0827 17:47:25.339408 6 log.go:172] (0xc001cd20b0) Data frame received for 1 I0827 17:47:25.339421 6 log.go:172] (0xc00141a320) (1) Data frame 
handling I0827 17:47:25.339428 6 log.go:172] (0xc00141a320) (1) Data frame sent I0827 17:47:25.339437 6 log.go:172] (0xc001cd20b0) (0xc00141a320) Stream removed, broadcasting: 1 I0827 17:47:25.339585 6 log.go:172] (0xc001cd20b0) (0xc00141a320) Stream removed, broadcasting: 1 I0827 17:47:25.339602 6 log.go:172] (0xc001cd20b0) (0xc000f86000) Stream removed, broadcasting: 3 I0827 17:47:25.339612 6 log.go:172] (0xc001cd20b0) (0xc0020ca280) Stream removed, broadcasting: 5 Aug 27 17:47:25.339: INFO: Found all expected endpoints: [netserver-0] I0827 17:47:25.339820 6 log.go:172] (0xc001cd20b0) Go away received Aug 27 17:47:25.342: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.228 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6dpm9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 17:47:25.342: INFO: >>> kubeConfig: /root/.kube/config I0827 17:47:25.371125 6 log.go:172] (0xc000d9f600) (0xc000f86280) Create stream I0827 17:47:25.371194 6 log.go:172] (0xc000d9f600) (0xc000f86280) Stream added, broadcasting: 1 I0827 17:47:25.373665 6 log.go:172] (0xc000d9f600) Reply frame received for 1 I0827 17:47:25.373702 6 log.go:172] (0xc000d9f600) (0xc0020ca320) Create stream I0827 17:47:25.373715 6 log.go:172] (0xc000d9f600) (0xc0020ca320) Stream added, broadcasting: 3 I0827 17:47:25.374655 6 log.go:172] (0xc000d9f600) Reply frame received for 3 I0827 17:47:25.374701 6 log.go:172] (0xc000d9f600) (0xc0020ca3c0) Create stream I0827 17:47:25.374715 6 log.go:172] (0xc000d9f600) (0xc0020ca3c0) Stream added, broadcasting: 5 I0827 17:47:25.375557 6 log.go:172] (0xc000d9f600) Reply frame received for 5 I0827 17:47:26.457733 6 log.go:172] (0xc000d9f600) Data frame received for 3 I0827 17:47:26.457760 6 log.go:172] (0xc0020ca320) (3) Data frame handling I0827 17:47:26.457773 6 log.go:172] (0xc0020ca320) (3) Data frame sent I0827 17:47:26.457816 6 log.go:172] (0xc000d9f600) Data frame received for 5 I0827 17:47:26.457825 6 log.go:172] (0xc0020ca3c0) (5) Data frame handling I0827 17:47:26.458013 6 log.go:172] (0xc000d9f600) Data frame received for 3 I0827 17:47:26.458071 6 log.go:172] (0xc0020ca320) (3) Data frame handling I0827 17:47:26.459934 6 log.go:172] (0xc000d9f600) Data frame received for 1 I0827 17:47:26.459950 6 log.go:172] (0xc000f86280) (1) Data frame handling I0827 17:47:26.459957 6 log.go:172] (0xc000f86280) (1) Data frame sent I0827 17:47:26.460176 6 log.go:172] (0xc000d9f600) (0xc000f86280) Stream removed, broadcasting: 1 I0827 17:47:26.460244 6 log.go:172] (0xc000d9f600) Go away received I0827 17:47:26.460377 6 log.go:172] (0xc000d9f600) (0xc000f86280) Stream removed, broadcasting: 1 I0827 17:47:26.460428 6 log.go:172] (0xc000d9f600) (0xc0020ca320) Stream removed, broadcasting: 3 I0827 17:47:26.460458 6 log.go:172] (0xc000d9f600) (0xc0020ca3c0) Stream removed, broadcasting: 5 Aug 27 17:47:26.460: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:47:26.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-6dpm9" for this suite. 
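The UDP reachability check above boils down to piping a token into nc from the host test pod; the same probe can be run by hand (namespace is a placeholder, while the pod name, target IP and port are the ones from this run and would differ elsewhere):
# Send a UDP probe from the hostexec container to a netserver pod IP.
kubectl exec --namespace my-namespace host-test-container-pod -c hostexec -- \
  /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.244.1.111 8081"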
Aug 27 17:47:48.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:47:48.547: INFO: namespace: e2e-tests-pod-network-test-6dpm9, resource: bindings, ignored listing per whitelist Aug 27 17:47:48.621: INFO: namespace e2e-tests-pod-network-test-6dpm9 deletion completed in 22.155608237s • [SLOW TEST:47.265 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:47:48.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:47:48.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-h6p52" to be "success or failure" Aug 27 17:47:48.866: INFO: Pod "downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 59.801186ms Aug 27 17:47:50.870: INFO: Pod "downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063348799s Aug 27 17:47:52.914: INFO: Pod "downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107723429s STEP: Saw pod success Aug 27 17:47:52.914: INFO: Pod "downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:47:52.916: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 17:47:52.956: INFO: Waiting for pod downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:47:52.970: INFO: Pod downwardapi-volume-6bf953bb-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:47:52.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-h6p52" for this suite. 
Aug 27 17:47:58.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:47:59.001: INFO: namespace: e2e-tests-downward-api-h6p52, resource: bindings, ignored listing per whitelist Aug 27 17:47:59.053: INFO: namespace e2e-tests-downward-api-h6p52 deletion completed in 6.078629096s • [SLOW TEST:10.431 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:47:59.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 27 17:47:59.152: INFO: Waiting up to 5m0s for pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-7kqtn" to be "success or failure" Aug 27 17:47:59.169: INFO: Pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.107082ms Aug 27 17:48:01.189: INFO: Pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037674719s Aug 27 17:48:03.369: INFO: Pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217618299s Aug 27 17:48:05.372: INFO: Pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220524972s Aug 27 17:48:07.376: INFO: Pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 8.224566891s Aug 27 17:48:09.681: INFO: Pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.529490237s STEP: Saw pod success Aug 27 17:48:09.681: INFO: Pod "downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:48:10.076: INFO: Trying to get logs from node hunter-worker pod downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b container dapi-container: STEP: delete the pod Aug 27 17:48:10.602: INFO: Waiting for pod downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:48:10.746: INFO: Pod downward-api-7227b7ee-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:48:10.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7kqtn" for this suite. 
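The dapi-container above reads its own resource settings through resourceFieldRef environment variables. A cut-down sketch of the same wiring; image, variable names and quantities are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.memory

requests.memory and limits.cpu follow the same pattern with their own env entries.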
Aug 27 17:48:16.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:48:16.860: INFO: namespace: e2e-tests-downward-api-7kqtn, resource: bindings, ignored listing per whitelist Aug 27 17:48:16.879: INFO: namespace e2e-tests-downward-api-7kqtn deletion completed in 6.128362352s • [SLOW TEST:17.826 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:48:16.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Aug 27 17:48:21.163: INFO: Pod pod-hostip-7cd72d3f-e88d-11ea-b58c-0242ac11000b has hostIP: 172.18.0.2 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:48:21.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6q6x9" for this suite. 
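The host IP reported for pod-hostip-... comes from .status.hostIP; the same field can also be surfaced inside the container through the downward API. A minimal sketch, with the image as an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "echo host IP is $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

From outside the pod, kubectl get pod pod-hostip -o jsonpath='{.status.hostIP}' reads the same field, which is essentially what the assertion above checks.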
Aug 27 17:48:43.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:48:43.420: INFO: namespace: e2e-tests-pods-6q6x9, resource: bindings, ignored listing per whitelist Aug 27 17:48:43.482: INFO: namespace e2e-tests-pods-6q6x9 deletion completed in 22.314509124s • [SLOW TEST:26.603 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:48:43.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-8cbc023b-e88d-11ea-b58c-0242ac11000b STEP: Creating secret with name secret-projected-all-test-volume-8cbc0220-e88d-11ea-b58c-0242ac11000b STEP: Creating a pod to test Check all projections for projected volume plugin Aug 27 17:48:43.904: INFO: Waiting up to 5m0s for pod "projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-92m8w" to be "success or failure" Aug 27 17:48:44.137: INFO: Pod "projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 232.996522ms Aug 27 17:48:46.165: INFO: Pod "projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261189065s Aug 27 17:48:48.169: INFO: Pod "projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.264852125s STEP: Saw pod success Aug 27 17:48:48.169: INFO: Pod "projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:48:48.172: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b container projected-all-volume-test: STEP: delete the pod Aug 27 17:48:48.190: INFO: Waiting for pod projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:48:48.347: INFO: Pod projected-volume-8cbc01ce-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:48:48.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-92m8w" for this suite. 
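The projected-volume pod above merges a ConfigMap, a Secret and downward API data into a single volume. A hand-written sketch of that layout; the object names, keys and paths are placeholders rather than the generated ...-test-volume-... names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-all
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-cm             # placeholder ConfigMap name
          items:
          - key: data
            path: cm
      - secret:
          name: projected-secret         # placeholder Secret name
          items:
          - key: data
            path: secret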
Aug 27 17:48:56.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:48:56.535: INFO: namespace: e2e-tests-projected-92m8w, resource: bindings, ignored listing per whitelist Aug 27 17:48:56.578: INFO: namespace e2e-tests-projected-92m8w deletion completed in 8.226844311s • [SLOW TEST:13.096 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:48:56.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 27 17:48:57.797: INFO: Waiting up to 5m0s for pod "pod-950b7311-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-52d6z" to be "success or failure" Aug 27 17:48:57.834: INFO: Pod "pod-950b7311-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.942861ms Aug 27 17:48:59.975: INFO: Pod "pod-950b7311-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177624891s Aug 27 17:49:02.166: INFO: Pod "pod-950b7311-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369174969s Aug 27 17:49:04.170: INFO: Pod "pod-950b7311-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.37262716s Aug 27 17:49:06.174: INFO: Pod "pod-950b7311-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.376855873s STEP: Saw pod success Aug 27 17:49:06.174: INFO: Pod "pod-950b7311-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:49:06.177: INFO: Trying to get logs from node hunter-worker2 pod pod-950b7311-e88d-11ea-b58c-0242ac11000b container test-container: STEP: delete the pod Aug 27 17:49:06.222: INFO: Waiting for pod pod-950b7311-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:49:06.242: INFO: Pod pod-950b7311-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:49:06.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-52d6z" for this suite. 
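The (non-root,0777,tmpfs) case above amounts to a memory-backed emptyDir written by a non-root user, with 0777 referring to the permission bits exercised on the file inside the volume. The suite drives this with its own mounttest image; a rough busybox approximation:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                      # arbitrary non-root UID
  containers:
  - name: test-container
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # tmpfs instead of node disk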
Aug 27 17:49:12.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:49:12.370: INFO: namespace: e2e-tests-emptydir-52d6z, resource: bindings, ignored listing per whitelist Aug 27 17:49:12.375: INFO: namespace e2e-tests-emptydir-52d6z deletion completed in 6.101809296s • [SLOW TEST:15.796 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:49:12.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 27 17:49:12.737: INFO: Waiting up to 5m0s for pod "pod-9df32cd1-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-95tkz" to be "success or failure" Aug 27 17:49:12.825: INFO: Pod "pod-9df32cd1-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 88.215007ms Aug 27 17:49:14.829: INFO: Pod "pod-9df32cd1-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092386787s Aug 27 17:49:16.833: INFO: Pod "pod-9df32cd1-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096362685s STEP: Saw pod success Aug 27 17:49:16.833: INFO: Pod "pod-9df32cd1-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:49:16.836: INFO: Trying to get logs from node hunter-worker pod pod-9df32cd1-e88d-11ea-b58c-0242ac11000b container test-container: STEP: delete the pod Aug 27 17:49:16.878: INFO: Waiting for pod pod-9df32cd1-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:49:16.890: INFO: Pod pod-9df32cd1-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:49:16.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-95tkz" for this suite. 
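The (root,0644,default) variant is the same idea on the node's default medium, with 0644 file permissions and no non-root securityContext. A comparable sketch under the same assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                         # default medium: node-local disk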
Aug 27 17:49:22.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:49:22.965: INFO: namespace: e2e-tests-emptydir-95tkz, resource: bindings, ignored listing per whitelist Aug 27 17:49:22.968: INFO: namespace e2e-tests-emptydir-95tkz deletion completed in 6.074367508s • [SLOW TEST:10.593 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:49:22.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-a427aa0f-e88d-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume secrets Aug 27 17:49:23.077: INFO: Waiting up to 5m0s for pod "pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-677gb" to be "success or failure" Aug 27 17:49:23.167: INFO: Pod "pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.989934ms Aug 27 17:49:25.257: INFO: Pod "pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180572594s Aug 27 17:49:27.261: INFO: Pod "pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183744923s STEP: Saw pod success Aug 27 17:49:27.261: INFO: Pod "pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:49:27.263: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b container secret-volume-test: STEP: delete the pod Aug 27 17:49:27.285: INFO: Waiting for pod pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:49:27.290: INFO: Pod pod-secrets-a42a1695-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:49:27.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-677gb" for this suite. 
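"With mappings" above means the secret's keys are remapped to explicit paths via items instead of being mounted under their own names. A minimal hand-written pair; the names, key and path are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1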
Aug 27 17:49:37.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:49:37.550: INFO: namespace: e2e-tests-secrets-677gb, resource: bindings, ignored listing per whitelist Aug 27 17:49:37.592: INFO: namespace e2e-tests-secrets-677gb deletion completed in 10.297737198s • [SLOW TEST:14.623 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:49:37.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:49:39.617: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-j5tj7" to be "success or failure" Aug 27 17:49:39.933: INFO: Pod "downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 316.46428ms Aug 27 17:49:41.969: INFO: Pod "downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352634556s Aug 27 17:49:44.221: INFO: Pod "downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.603930423s Aug 27 17:49:46.407: INFO: Pod "downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.789927532s STEP: Saw pod success Aug 27 17:49:46.407: INFO: Pod "downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:49:46.409: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 17:49:46.842: INFO: Waiting for pod downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:49:46.945: INFO: Pod downwardapi-volume-adce94f2-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:49:46.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j5tj7" for this suite. 
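This spec is the projected-volume flavour of the earlier downward API checks, this time reading requests.memory. A compact sketch, with the image, request size and paths assumed:

apiVersion: v1
kind: Pod
metadata:
  name: projected-memory-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory

With the default divisor of 1, the file should read 33554432 (32Mi in bytes).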
Aug 27 17:49:55.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:49:55.415: INFO: namespace: e2e-tests-projected-j5tj7, resource: bindings, ignored listing per whitelist Aug 27 17:49:55.603: INFO: namespace e2e-tests-projected-j5tj7 deletion completed in 8.655311025s • [SLOW TEST:18.012 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:49:55.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 27 17:49:57.053: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:49:57.055: INFO: Number of nodes with available pods: 0 Aug 27 17:49:57.055: INFO: Node hunter-worker is running more than one daemon pod Aug 27 17:49:58.144: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:49:58.147: INFO: Number of nodes with available pods: 0 Aug 27 17:49:58.147: INFO: Node hunter-worker is running more than one daemon pod Aug 27 17:49:59.695: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:49:59.711: INFO: Number of nodes with available pods: 0 Aug 27 17:49:59.711: INFO: Node hunter-worker is running more than one daemon pod Aug 27 17:50:00.240: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:00.455: INFO: Number of nodes with available pods: 0 Aug 27 17:50:00.455: INFO: Node hunter-worker is running more than one daemon pod Aug 27 17:50:01.060: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:01.064: INFO: Number of nodes with available pods: 0 Aug 27 17:50:01.064: INFO: Node hunter-worker is running more than one daemon pod Aug 27 17:50:02.456: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:02.508: INFO: Number of nodes with available pods: 0 Aug 27 17:50:02.508: INFO: Node hunter-worker is running more than one daemon pod Aug 27 17:50:03.168: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:03.171: INFO: Number of nodes with available pods: 0 Aug 27 17:50:03.171: INFO: Node hunter-worker is running more than one daemon pod Aug 27 17:50:04.162: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:04.164: INFO: Number of nodes with available pods: 2 Aug 27 17:50:04.164: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Aug 27 17:50:04.185: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:04.187: INFO: Number of nodes with available pods: 1 Aug 27 17:50:04.187: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:05.191: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:05.194: INFO: Number of nodes with available pods: 1 Aug 27 17:50:05.194: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:06.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:06.194: INFO: Number of nodes with available pods: 1 Aug 27 17:50:06.194: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:07.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:07.195: INFO: Number of nodes with available pods: 1 Aug 27 17:50:07.195: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:08.194: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:08.197: INFO: Number of nodes with available pods: 1 Aug 27 17:50:08.197: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:09.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:09.195: INFO: Number of nodes with available pods: 1 Aug 27 17:50:09.195: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:10.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:10.195: INFO: Number of nodes with available pods: 1 Aug 27 17:50:10.195: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:11.193: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node Aug 27 17:50:11.196: INFO: Number of nodes with available pods: 1 Aug 27 17:50:11.196: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:12.241: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:12.243: INFO: Number of nodes with available pods: 1 Aug 27 17:50:12.243: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:13.193: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:13.196: INFO: Number of nodes with available pods: 1 Aug 27 17:50:13.196: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:14.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:14.194: INFO: Number of nodes with available pods: 1 Aug 27 17:50:14.194: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:15.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:15.194: INFO: Number of nodes with available pods: 1 Aug 27 17:50:15.194: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:16.241: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:16.244: INFO: Number of nodes with available pods: 1 Aug 27 17:50:16.244: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:17.191: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:17.194: INFO: Number of nodes with available pods: 1 Aug 27 17:50:17.194: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:18.192: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:18.195: INFO: Number of nodes with available pods: 1 Aug 27 17:50:18.195: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:19.191: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:19.196: INFO: Number of nodes with available pods: 1 Aug 27 17:50:19.196: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:20.191: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:20.195: INFO: Number of nodes with available pods: 1 Aug 27 17:50:20.195: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:21.318: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:21.322: INFO: Number of nodes with available pods: 1 Aug 27 17:50:21.322: INFO: Node 
hunter-worker2 is running more than one daemon pod Aug 27 17:50:22.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:22.311: INFO: Number of nodes with available pods: 1 Aug 27 17:50:22.311: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:23.193: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:23.197: INFO: Number of nodes with available pods: 1 Aug 27 17:50:23.197: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:24.193: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:24.195: INFO: Number of nodes with available pods: 1 Aug 27 17:50:24.195: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 17:50:25.504: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 17:50:25.665: INFO: Number of nodes with available pods: 2 Aug 27 17:50:25.665: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zf9qh, will wait for the garbage collector to delete the pods Aug 27 17:50:25.724: INFO: Deleting DaemonSet.extensions daemon-set took: 5.908346ms Aug 27 17:50:25.825: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.239934ms Aug 27 17:50:38.503: INFO: Number of nodes with available pods: 0 Aug 27 17:50:38.503: INFO: Number of running nodes: 0, number of available pods: 0 Aug 27 17:50:38.508: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zf9qh/daemonsets","resourceVersion":"2690077"},"items":null} Aug 27 17:50:38.511: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zf9qh/pods","resourceVersion":"2690077"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:50:38.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-zf9qh" for this suite. 
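The long run of "can't tolerate node hunter-control-plane" lines above is expected: the DaemonSet template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so only the two worker nodes are counted. A minimal DaemonSet of the kind being exercised; the image is an assumption, any long-running image works:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine         # assumed image

Adding a matching toleration under template.spec.tolerations would schedule a pod on the control-plane node as well.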
Aug 27 17:50:44.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:50:44.648: INFO: namespace: e2e-tests-daemonsets-zf9qh, resource: bindings, ignored listing per whitelist Aug 27 17:50:44.682: INFO: namespace e2e-tests-daemonsets-zf9qh deletion completed in 6.160915906s • [SLOW TEST:49.078 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:50:44.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-mz479 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-mz479 STEP: Deleting pre-stop pod Aug 27 17:50:57.878: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:50:57.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-mz479" for this suite. 
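The "prestop": 1 entry in the JSON above is the server counting one callback from the tester pod's preStop hook as that pod was deleted. The suite wires this up with its own images; a rough sketch of the tester side only, where the image and the callback URL are hypothetical placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "wget -q -O- http://server:8080/prestop"]   # hypothetical server endpoint

Deleting the pod runs the preStop command before the container is stopped, which is what the server tallies.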
Aug 27 17:51:41.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:51:42.009: INFO: namespace: e2e-tests-prestop-mz479, resource: bindings, ignored listing per whitelist Aug 27 17:51:42.057: INFO: namespace e2e-tests-prestop-mz479 deletion completed in 44.108948153s • [SLOW TEST:57.375 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:51:42.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:51:42.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-p696s" to be "success or failure" Aug 27 17:51:43.068: INFO: Pod "downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 408.317711ms Aug 27 17:51:45.072: INFO: Pod "downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4121752s Aug 27 17:51:47.076: INFO: Pod "downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416650236s Aug 27 17:51:49.080: INFO: Pod "downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 6.420286509s Aug 27 17:51:51.163: INFO: Pod "downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.503505836s STEP: Saw pod success Aug 27 17:51:51.163: INFO: Pod "downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:51:51.166: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 17:51:51.405: INFO: Waiting for pod downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b to disappear Aug 27 17:51:51.731: INFO: Pod downwardapi-volume-f76088eb-e88d-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:51:51.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-p696s" for this suite. 
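DefaultMode here is the permission bits applied to every file the downward API volume projects. A small sketch; the image, path and the 0400 mode are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                  # applied to the projected files
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name

The projected file should come out readable only by its owner (mode 0400).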
Aug 27 17:52:00.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:52:00.208: INFO: namespace: e2e-tests-downward-api-p696s, resource: bindings, ignored listing per whitelist Aug 27 17:52:00.219: INFO: namespace e2e-tests-downward-api-p696s deletion completed in 8.483730572s • [SLOW TEST:18.162 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:52:00.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 27 17:52:12.869: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:12.888: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:14.888: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:14.892: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:16.888: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:16.892: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:18.888: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:18.892: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:20.889: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:20.907: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:22.888: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:22.892: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:24.889: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:24.893: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:26.888: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:26.892: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:28.889: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:28.893: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:30.888: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:30.892: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 17:52:32.888: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 17:52:32.892: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: 
check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:52:32.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dbscg" for this suite. Aug 27 17:52:56.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:52:57.003: INFO: namespace: e2e-tests-container-lifecycle-hook-dbscg, resource: bindings, ignored listing per whitelist Aug 27 17:52:57.005: INFO: namespace e2e-tests-container-lifecycle-hook-dbscg deletion completed in 24.09964436s • [SLOW TEST:56.785 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:52:57.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Aug 27 17:52:57.156: INFO: Waiting up to 5m0s for pod "client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b" in namespace "e2e-tests-containers-tcf6x" to be "success or failure" Aug 27 17:52:57.171: INFO: Pod "client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.394169ms Aug 27 17:52:59.176: INFO: Pod "client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019821803s Aug 27 17:53:01.180: INFO: Pod "client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023927995s STEP: Saw pod success Aug 27 17:53:01.180: INFO: Pod "client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:53:01.182: INFO: Trying to get logs from node hunter-worker pod client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b container test-container: STEP: delete the pod Aug 27 17:53:01.234: INFO: Waiting for pod client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b to disappear Aug 27 17:53:01.240: INFO: Pod client-containers-23c7a0d6-e88e-11ea-b58c-0242ac11000b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:53:01.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-tcf6x" for this suite. 
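"Override all" in the step above means both the image's ENTRYPOINT and CMD are replaced from the pod spec. A minimal sketch; the image and argument strings are assumptions (the suite uses its own entrypoint-tester image):

apiVersion: v1
kind: Pod
metadata:
  name: containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                       # assumed image
    command: ["/bin/echo"]               # replaces the image ENTRYPOINT
    args: ["override", "arguments"]      # replaces the image CMD

Leaving command unset while setting args would keep the image's ENTRYPOINT and replace only its CMD.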
Aug 27 17:53:07.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:53:07.326: INFO: namespace: e2e-tests-containers-tcf6x, resource: bindings, ignored listing per whitelist Aug 27 17:53:07.364: INFO: namespace e2e-tests-containers-tcf6x deletion completed in 6.120928394s • [SLOW TEST:10.359 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:53:07.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 17:53:07.479: INFO: Creating deployment "nginx-deployment" Aug 27 17:53:07.486: INFO: Waiting for observed generation 1 Aug 27 17:53:09.625: INFO: Waiting for all required pods to come up Aug 27 17:53:09.630: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 27 17:53:21.638: INFO: Waiting for deployment "nginx-deployment" to complete Aug 27 17:53:21.642: INFO: Updating deployment "nginx-deployment" with a non-existent image Aug 27 17:53:21.647: INFO: Updating deployment nginx-deployment Aug 27 17:53:21.647: INFO: Waiting for observed generation 2 Aug 27 17:53:23.990: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 27 17:53:23.993: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 27 17:53:23.995: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 27 17:53:24.002: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 27 17:53:24.002: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 27 17:53:24.004: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 27 17:53:24.008: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Aug 27 17:53:24.008: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Aug 27 17:53:24.013: INFO: Updating deployment nginx-deployment Aug 27 17:53:24.013: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Aug 27 17:53:24.508: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 27 17:53:24.697: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 27 17:53:24.970: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-9vtll,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9vtll/deployments/nginx-deployment,UID:29efd458-e88e-11ea-a485-0242ac120004,ResourceVersion:2690740,Generation:3,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-27 17:53:22 +0000 UTC 2020-08-27 17:53:07 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-08-27 17:53:24 +0000 UTC 2020-08-27 17:53:24 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Aug 27 17:53:25.500: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-9vtll,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9vtll/replicasets/nginx-deployment-5c98f8fb5,UID:3261adb4-e88e-11ea-a485-0242ac120004,ResourceVersion:2690716,Generation:3,CreationTimestamp:2020-08-27 17:53:21 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 29efd458-e88e-11ea-a485-0242ac120004 0xc00205a5b7 0xc00205a5b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 27 17:53:25.500: INFO: All old ReplicaSets of Deployment "nginx-deployment": Aug 27 17:53:25.501: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-9vtll,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9vtll/replicasets/nginx-deployment-85ddf47c5d,UID:29f1d101-e88e-11ea-a485-0242ac120004,ResourceVersion:2690752,Generation:3,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 29efd458-e88e-11ea-a485-0242ac120004 0xc00205a677 0xc00205a678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Aug 27 17:53:25.724: INFO: Pod "nginx-deployment-5c98f8fb5-6k5ps" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6k5ps,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-6k5ps,UID:345d0fec-e88e-11ea-a485-0242ac120004,ResourceVersion:2690763,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d90a7 0xc0018d90a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.724: INFO: Pod "nginx-deployment-5c98f8fb5-9xf2s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9xf2s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-9xf2s,UID:3264bddd-e88e-11ea-a485-0242ac120004,ResourceVersion:2690682,Generation:0,CreationTimestamp:2020-08-27 17:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d91b7 0xc0018d91b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9230} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-08-27 17:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 17:53:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.724: INFO: Pod "nginx-deployment-5c98f8fb5-b4x78" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b4x78,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-b4x78,UID:32718c39-e88e-11ea-a485-0242ac120004,ResourceVersion:2690702,Generation:0,CreationTimestamp:2020-08-27 17:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9310 0xc0018d9311}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9390} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d93b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 17:53:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 
0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.724: INFO: Pod "nginx-deployment-5c98f8fb5-fc2lh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fc2lh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-fc2lh,UID:348a8532-e88e-11ea-a485-0242ac120004,ResourceVersion:2690772,Generation:0,CreationTimestamp:2020-08-27 17:53:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9470 0xc0018d9471}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d94f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.724: INFO: Pod "nginx-deployment-5c98f8fb5-fzfl7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fzfl7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-fzfl7,UID:344f9493-e88e-11ea-a485-0242ac120004,ResourceVersion:2690747,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9587 0xc0018d9588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9600} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.724: INFO: Pod "nginx-deployment-5c98f8fb5-hs7ws" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hs7ws,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-hs7ws,UID:34335645-e88e-11ea-a485-0242ac120004,ResourceVersion:2690743,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9697 0xc0018d9698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9710} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.724: INFO: Pod "nginx-deployment-5c98f8fb5-jrd76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jrd76,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-jrd76,UID:345d0d4b-e88e-11ea-a485-0242ac120004,ResourceVersion:2690762,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d97a7 0xc0018d97a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9820} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-5c98f8fb5-m2kb6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m2kb6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-m2kb6,UID:329759a1-e88e-11ea-a485-0242ac120004,ResourceVersion:2690705,Generation:0,CreationTimestamp:2020-08-27 17:53:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d98b7 0xc0018d98b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9930} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 17:53:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-5c98f8fb5-p2qj2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p2qj2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-p2qj2,UID:32a2520f-e88e-11ea-a485-0242ac120004,ResourceVersion:2690709,Generation:0,CreationTimestamp:2020-08-27 17:53:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9a10 0xc0018d9a11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 17:53:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-5c98f8fb5-q22gn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-q22gn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-q22gn,UID:344fb877-e88e-11ea-a485-0242ac120004,ResourceVersion:2690753,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9b70 0xc0018d9b71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-5c98f8fb5-rw5fx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rw5fx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-rw5fx,UID:32718b2a-e88e-11ea-a485-0242ac120004,ResourceVersion:2690684,Generation:0,CreationTimestamp:2020-08-27 17:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9c97 0xc0018d9c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 17:53:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-5c98f8fb5-sddfn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sddfn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-sddfn,UID:345cfed1-e88e-11ea-a485-0242ac120004,ResourceVersion:2690760,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9df0 0xc0018d9df1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-5c98f8fb5-shz5w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-shz5w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-5c98f8fb5-shz5w,UID:345d0b33-e88e-11ea-a485-0242ac120004,ResourceVersion:2690761,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3261adb4-e88e-11ea-a485-0242ac120004 0xc0018d9f07 0xc0018d9f08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018d9f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018d9fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-85ddf47c5d-2sbxc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2sbxc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-2sbxc,UID:34334eef-e88e-11ea-a485-0242ac120004,ResourceVersion:2690745,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132017 0xc002132018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021320b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-85ddf47c5d-2tbzc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2tbzc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-2tbzc,UID:29f7b905-e88e-11ea-a485-0242ac120004,ResourceVersion:2690636,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132127 
0xc002132128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021321a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021321c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.239,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6362cb20b4170b5d410d1347d3fd07d4e0fe0e59d3dd40dc458f22b287f8fe3f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-85ddf47c5d-44z2g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-44z2g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-44z2g,UID:344fba36-e88e-11ea-a485-0242ac120004,ResourceVersion:2690751,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132287 0xc002132288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.725: INFO: Pod "nginx-deployment-85ddf47c5d-54rcx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-54rcx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-54rcx,UID:34166c57-e88e-11ea-a485-0242ac120004,ResourceVersion:2690774,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132397 0xc002132398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132410} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002132430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 17:53:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-6tp9x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6tp9x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-6tp9x,UID:34165f93-e88e-11ea-a485-0242ac120004,ResourceVersion:2690773,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc0021324e7 0xc0021324e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132560} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 17:53:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-7d49l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7d49l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-7d49l,UID:3415d3e3-e88e-11ea-a485-0242ac120004,ResourceVersion:2690739,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132637 0xc002132638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021326b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021326d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 17:53:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 
docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-cd7vj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cd7vj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-cd7vj,UID:2a000c30-e88e-11ea-a485-0242ac120004,ResourceVersion:2690644,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132787 0xc002132788}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.125,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://da6b060299d6afe467d3fa3261439ede8b61bb3e50fd1dc9eb81714654420766}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-dxlh9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dxlh9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-dxlh9,UID:344fb7a2-e88e-11ea-a485-0242ac120004,ResourceVersion:2690748,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc0021328e7 0xc0021328e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132960} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-fj9pf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fj9pf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-fj9pf,UID:344fdd03-e88e-11ea-a485-0242ac120004,ResourceVersion:2690756,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc0021329f7 0xc0021329f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-jj57b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jj57b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-jj57b,UID:2a14d4f4-e88e-11ea-a485-0242ac120004,ResourceVersion:2690626,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132b07 0xc002132b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.123,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f93e614f04b039ba08143637f3176ccca863b79c12c5349854c81504bbd140e0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-jlkrj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jlkrj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-jlkrj,UID:29f70a65-e88e-11ea-a485-0242ac120004,ResourceVersion:2690608,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132c67 0xc002132c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.238,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://66a818bef85435189892296e651963f180a2d08de42a2158240b09ce6f7405f3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-klm79" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-klm79,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-klm79,UID:34334501-e88e-11ea-a485-0242ac120004,ResourceVersion:2690737,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132dc7 0xc002132dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.726: INFO: Pod "nginx-deployment-85ddf47c5d-kmpnz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kmpnz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-kmpnz,UID:344fcfa9-e88e-11ea-a485-0242ac120004,ResourceVersion:2690750,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132ed7 0xc002132ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.727: INFO: Pod "nginx-deployment-85ddf47c5d-kpt5l" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kpt5l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-kpt5l,UID:2a0034a7-e88e-11ea-a485-0242ac120004,ResourceVersion:2690620,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002132fe7 0xc002132fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002133060} {node.kubernetes.io/unreachable Exists NoExecute 0xc002133080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.122,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://497752537f84da4ba686100f18fbedd05a527a725fae674007f03f1acc41b5e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.727: INFO: Pod "nginx-deployment-85ddf47c5d-mhwpb" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mhwpb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-mhwpb,UID:29f7b2ac-e88e-11ea-a485-0242ac120004,ResourceVersion:2690586,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002133147 0xc002133148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021331c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021331e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.121,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://def7acb0fc6416c12938f7b1ae46f6217588dd1c8cade144c2da07fc28e56720}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.727: INFO: Pod "nginx-deployment-85ddf47c5d-nznbg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nznbg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-nznbg,UID:34333ff0-e88e-11ea-a485-0242ac120004,ResourceVersion:2690742,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc0021332a7 0xc0021332a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002133320} {node.kubernetes.io/unreachable Exists NoExecute 0xc002133340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.727: INFO: Pod "nginx-deployment-85ddf47c5d-pvlsb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pvlsb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-pvlsb,UID:344fc8e0-e88e-11ea-a485-0242ac120004,ResourceVersion:2690749,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc0021333b7 0xc0021333b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002133430} {node.kubernetes.io/unreachable Exists NoExecute 0xc002133450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.729: INFO: Pod "nginx-deployment-85ddf47c5d-x2jgb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x2jgb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-x2jgb,UID:2a14df1e-e88e-11ea-a485-0242ac120004,ResourceVersion:2690650,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc0021334c7 0xc0021334c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002133540} {node.kubernetes.io/unreachable Exists NoExecute 0xc002133560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.242,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://22ec2cf868baab1d7be49bd0fe4a97c0ab4e28eab90b77a524d0f17ece3f6e74}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.730: INFO: Pod "nginx-deployment-85ddf47c5d-x747p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x747p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-x747p,UID:2a00221a-e88e-11ea-a485-0242ac120004,ResourceVersion:2690629,Generation:0,CreationTimestamp:2020-08-27 17:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002133627 0xc002133628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021336a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021336c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.241,StartTime:2020-08-27 17:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 17:53:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://80c8c271e3d8c9b0a18a99ea33dd3d38fbdf810bb2759f2c9f4d09167d384c4d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 17:53:25.730: INFO: Pod "nginx-deployment-85ddf47c5d-zswd2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zswd2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vtll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vtll/pods/nginx-deployment-85ddf47c5d-zswd2,UID:34336d61-e88e-11ea-a485-0242ac120004,ResourceVersion:2690744,Generation:0,CreationTimestamp:2020-08-27 17:53:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29f1d101-e88e-11ea-a485-0242ac120004 0xc002133787 0xc002133788}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75l74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75l74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75l74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002133800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002133820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 17:53:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:53:25.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-9vtll" for this suite. Aug 27 17:54:10.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:54:10.701: INFO: namespace: e2e-tests-deployment-9vtll, resource: bindings, ignored listing per whitelist Aug 27 17:54:10.764: INFO: namespace e2e-tests-deployment-9vtll deletion completed in 44.855829835s • [SLOW TEST:63.399 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:54:10.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 27 17:54:12.278: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 27 17:54:17.282: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:54:18.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-l4zfk" for this suite. 
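The pod dumps above all descend from a single nginx Deployment that the proportional-scaling test grows while several replicas are still Pending. For reference, a minimal Go sketch of an equivalent Deployment object built from the upstream API types is shown below; only the namespace, image (docker.io/library/nginx:1.14-alpine) and label (name: nginx) are taken from the log, the object name and replica count are assumptions.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumed scale target; the log shows 13+ pods of hash 85ddf47c5d, some still Pending.
	replicas := int32(13)

	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "nginx-deployment",
			Namespace: "e2e-tests-deployment-9vtll",
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", deploy)
}

Proportional scaling means the rollout's new and old ReplicaSets each receive a share of any further scale change in proportion to their current sizes, which is why the dump mixes available and not-yet-available pods from the same template hash.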
Aug 27 17:54:29.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:54:29.748: INFO: namespace: e2e-tests-replication-controller-l4zfk, resource: bindings, ignored listing per whitelist Aug 27 17:54:29.762: INFO: namespace e2e-tests-replication-controller-l4zfk deletion completed in 11.145251211s • [SLOW TEST:18.998 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:54:29.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 17:54:30.305: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:54:31.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-sp6gh" for this suite. 
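The CustomResourceDefinition test that follows only checks that a definition can be registered and removed again. A minimal sketch of such a definition using the apiextensions v1beta1 types (the API group a v1.13 cluster serves) is below; the group, kind and plural names are invented for illustration.

package main

import (
	"fmt"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextensionsv1beta1.CustomResourceDefinition{
		// The metadata name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
		},
	}
	fmt.Printf("%+v\n", crd)
}

The test would create an object like this through the apiextensions clientset and then delete it again; the exact client calls are not shown in the log.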
Aug 27 17:54:40.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:54:41.009: INFO: namespace: e2e-tests-custom-resource-definition-sp6gh, resource: bindings, ignored listing per whitelist Aug 27 17:54:41.048: INFO: namespace e2e-tests-custom-resource-definition-sp6gh deletion completed in 9.095641898s • [SLOW TEST:11.287 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:54:41.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Aug 27 17:54:41.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-m4hnq run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 27 17:55:19.622: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0827 17:55:17.727005 676 log.go:172] (0xc000312160) (0xc00065bb80) Create stream\nI0827 17:55:17.727035 676 log.go:172] (0xc000312160) (0xc00065bb80) Stream added, broadcasting: 1\nI0827 17:55:17.729442 676 log.go:172] (0xc000312160) Reply frame received for 1\nI0827 17:55:17.729472 676 log.go:172] (0xc000312160) (0xc00065bc20) Create stream\nI0827 17:55:17.729479 676 log.go:172] (0xc000312160) (0xc00065bc20) Stream added, broadcasting: 3\nI0827 17:55:17.730173 676 log.go:172] (0xc000312160) Reply frame received for 3\nI0827 17:55:17.730203 676 log.go:172] (0xc000312160) (0xc00065bcc0) Create stream\nI0827 17:55:17.730213 676 log.go:172] (0xc000312160) (0xc00065bcc0) Stream added, broadcasting: 5\nI0827 17:55:17.730870 676 log.go:172] (0xc000312160) Reply frame received for 5\nI0827 17:55:17.730900 676 log.go:172] (0xc000312160) (0xc00065bd60) Create stream\nI0827 17:55:17.730914 676 log.go:172] (0xc000312160) (0xc00065bd60) Stream added, broadcasting: 7\nI0827 17:55:17.731605 676 log.go:172] (0xc000312160) Reply frame received for 7\nI0827 17:55:17.731747 676 log.go:172] (0xc00065bc20) (3) Writing data frame\nI0827 17:55:17.731841 676 log.go:172] (0xc00065bc20) (3) Writing data frame\nI0827 17:55:17.732706 676 log.go:172] (0xc000312160) Data frame received for 5\nI0827 17:55:17.732892 676 log.go:172] (0xc00065bcc0) (5) Data frame handling\nI0827 17:55:17.732916 676 log.go:172] (0xc00065bcc0) (5) Data frame sent\nI0827 17:55:17.733204 676 log.go:172] (0xc000312160) Data frame received for 5\nI0827 17:55:17.733214 676 log.go:172] (0xc00065bcc0) (5) Data frame handling\nI0827 17:55:17.733219 676 log.go:172] (0xc00065bcc0) (5) Data frame sent\nI0827 17:55:17.766999 676 log.go:172] (0xc000312160) Data frame received for 7\nI0827 17:55:17.767119 676 log.go:172] (0xc00065bd60) (7) Data frame handling\nI0827 17:55:17.767160 676 log.go:172] (0xc000312160) Data frame received for 5\nI0827 17:55:17.767176 676 log.go:172] (0xc00065bcc0) (5) Data frame handling\nI0827 17:55:17.767343 676 log.go:172] (0xc000312160) Data frame received for 1\nI0827 17:55:17.767378 676 log.go:172] (0xc00065bb80) (1) Data frame handling\nI0827 17:55:17.767401 676 log.go:172] (0xc00065bb80) (1) Data frame sent\nI0827 17:55:17.767450 676 log.go:172] (0xc000312160) (0xc00065bb80) Stream removed, broadcasting: 1\nI0827 17:55:17.767572 676 log.go:172] (0xc000312160) (0xc00065bb80) Stream removed, broadcasting: 1\nI0827 17:55:17.767600 676 log.go:172] (0xc000312160) (0xc00065bc20) Stream removed, broadcasting: 3\nI0827 17:55:17.767617 676 log.go:172] (0xc000312160) (0xc00065bcc0) Stream removed, broadcasting: 5\nI0827 17:55:17.767926 676 log.go:172] (0xc000312160) (0xc00065bd60) Stream removed, broadcasting: 7\nI0827 17:55:17.768159 676 log.go:172] (0xc000312160) Go away received\n" Aug 27 17:55:19.622: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:55:21.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m4hnq" for this suite. 
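The kubectl invocation above relies on the deprecated job/v1 generator. Roughly the Job it generates, sketched here with the batch/v1 types; the name, image and command are taken from the log, every other field value is an assumption.

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// --restart=OnFailure is what selects the Job generator.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "e2e-test-rm-busybox-job",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
						// --stdin/--attach keep stdin open for a single attach.
						Stdin:     true,
						StdinOnce: true,
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", job)
}

The --rm flag then deletes the Job once the attached command exits, which is what the final `job.batch "e2e-test-rm-busybox-job" deleted` line in stdout reflects.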
Aug 27 17:55:34.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:55:36.512: INFO: namespace: e2e-tests-kubectl-m4hnq, resource: bindings, ignored listing per whitelist Aug 27 17:55:36.518: INFO: namespace e2e-tests-kubectl-m4hnq deletion completed in 14.73558968s • [SLOW TEST:55.469 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:55:36.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-84012e48-e88e-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume configMaps Aug 27 17:55:38.851: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-5ftp6" to be "success or failure" Aug 27 17:55:38.884: INFO: Pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.131285ms Aug 27 17:55:41.136: INFO: Pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285295782s Aug 27 17:55:43.244: INFO: Pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393186548s Aug 27 17:55:46.020: INFO: Pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.169442666s Aug 27 17:55:48.352: INFO: Pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.500917594s Aug 27 17:55:50.356: INFO: Pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.504913143s STEP: Saw pod success Aug 27 17:55:50.356: INFO: Pod "pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:55:50.359: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b container projected-configmap-volume-test: STEP: delete the pod Aug 27 17:55:51.032: INFO: Waiting for pod pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b to disappear Aug 27 17:55:51.094: INFO: Pod pod-projected-configmaps-840c662c-e88e-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:55:51.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5ftp6" for this suite. Aug 27 17:55:57.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:55:57.303: INFO: namespace: e2e-tests-projected-5ftp6, resource: bindings, ignored listing per whitelist Aug 27 17:55:57.316: INFO: namespace e2e-tests-projected-5ftp6 deletion completed in 6.2148985s • [SLOW TEST:20.798 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:55:57.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0827 17:55:58.630580 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
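The garbage-collector steps above ("delete the deployment", "wait for all rs to be garbage collected") hinge on the deletion propagation policy: a non-orphaning delete lets the garbage collector remove the dependent ReplicaSet and pods, whereas orphan propagation leaves them behind. A minimal sketch of the delete options involved is below; background propagation is shown as one non-orphaning choice, and the surrounding client call is only described in a comment because its signature varies across client-go releases.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background propagation deletes the Deployment right away and lets the
	// garbage collector clean up the dependent ReplicaSet and Pods afterwards.
	// metav1.DeletePropagationOrphan would instead keep the dependents.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// Options like these would be passed to a clientset's
	// AppsV1().Deployments(namespace).Delete call; the exact call signature
	// depends on the client-go version in use, so it is omitted here.
	fmt.Println("propagationPolicy:", *opts.PropagationPolicy)
}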
Aug 27 17:55:58.630: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:55:58.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9pt4d" for this suite. Aug 27 17:56:04.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:56:04.718: INFO: namespace: e2e-tests-gc-9pt4d, resource: bindings, ignored listing per whitelist Aug 27 17:56:04.732: INFO: namespace e2e-tests-gc-9pt4d deletion completed in 6.097050537s • [SLOW TEST:7.416 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:56:04.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-93a69e0f-e88e-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume configMaps Aug 27 17:56:04.863: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-xjmqx" to be "success or failure" Aug 27 17:56:04.866: INFO: Pod "pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.624751ms Aug 27 17:56:06.871: INFO: Pod "pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008016811s Aug 27 17:56:08.915: INFO: Pod "pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052005402s STEP: Saw pod success Aug 27 17:56:08.915: INFO: Pod "pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:56:08.921: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b container projected-configmap-volume-test: STEP: delete the pod Aug 27 17:56:08.955: INFO: Waiting for pod pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b to disappear Aug 27 17:56:08.972: INFO: Pod pod-projected-configmaps-93a8fc57-e88e-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:56:08.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xjmqx" for this suite. Aug 27 17:56:15.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:56:15.296: INFO: namespace: e2e-tests-projected-xjmqx, resource: bindings, ignored listing per whitelist Aug 27 17:56:15.340: INFO: namespace e2e-tests-projected-xjmqx deletion completed in 6.365214222s • [SLOW TEST:10.608 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:56:15.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 27 17:56:15.495: INFO: Waiting up to 5m0s for pod "downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-8pnm6" to be "success or failure" Aug 27 17:56:15.591: INFO: Pod "downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 95.579313ms Aug 27 17:56:17.595: INFO: Pod "downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099873907s Aug 27 17:56:19.729: INFO: Pod "downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.233654758s STEP: Saw pod success Aug 27 17:56:19.729: INFO: Pod "downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:56:19.732: INFO: Trying to get logs from node hunter-worker pod downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b container dapi-container: STEP: delete the pod Aug 27 17:56:19.754: INFO: Waiting for pod downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b to disappear Aug 27 17:56:19.764: INFO: Pod downward-api-99fd4bb2-e88e-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:56:19.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8pnm6" for this suite. Aug 27 17:56:25.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:56:25.792: INFO: namespace: e2e-tests-downward-api-8pnm6, resource: bindings, ignored listing per whitelist Aug 27 17:56:25.845: INFO: namespace e2e-tests-downward-api-8pnm6 deletion completed in 6.077169971s • [SLOW TEST:10.505 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:56:25.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:56:26.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-7ckhr" to be "success or failure" Aug 27 17:56:26.999: INFO: Pod "downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.140904ms Aug 27 17:56:29.118: INFO: Pod "downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163446041s Aug 27 17:56:31.130: INFO: Pod "downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175155493s Aug 27 17:56:33.179: INFO: Pod "downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223516722s Aug 27 17:56:35.187: INFO: Pod "downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.23241884s STEP: Saw pod success Aug 27 17:56:35.188: INFO: Pod "downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 17:56:35.190: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 17:56:35.777: INFO: Waiting for pod downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b to disappear Aug 27 17:56:35.879: INFO: Pod downwardapi-volume-a0cf826c-e88e-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:56:35.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7ckhr" for this suite. Aug 27 17:56:43.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:56:43.958: INFO: namespace: e2e-tests-downward-api-7ckhr, resource: bindings, ignored listing per whitelist Aug 27 17:56:43.970: INFO: namespace e2e-tests-downward-api-7ckhr deletion completed in 8.087970746s • [SLOW TEST:18.125 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:56:43.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-n95x7 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n95x7 to expose endpoints map[] Aug 27 17:56:45.717: INFO: Get endpoints failed (706.724318ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Aug 27 17:56:46.723: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n95x7 exposes endpoints map[] (1.711748832s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-n95x7 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n95x7 to expose endpoints map[pod1:[100]] Aug 27 17:56:53.174: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (6.447277371s elapsed, will retry) Aug 27 17:56:57.348: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n95x7 exposes endpoints map[pod1:[100]] (10.620850852s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-n95x7 STEP: waiting up to 3m0s for service multi-endpoint-test in 
namespace e2e-tests-services-n95x7 to expose endpoints map[pod1:[100] pod2:[101]] Aug 27 17:57:02.249: INFO: Unexpected endpoints: found map[ac9e005f-e88e-11ea-a485-0242ac120004:[100]], expected map[pod1:[100] pod2:[101]] (4.897084896s elapsed, will retry) Aug 27 17:57:04.716: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n95x7 exposes endpoints map[pod2:[101] pod1:[100]] (7.36440468s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-n95x7 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n95x7 to expose endpoints map[pod2:[101]] Aug 27 17:57:06.633: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n95x7 exposes endpoints map[pod2:[101]] (1.912862922s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-n95x7 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n95x7 to expose endpoints map[] Aug 27 17:57:09.290: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n95x7 exposes endpoints map[] (2.042956151s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:57:10.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-n95x7" for this suite. Aug 27 17:57:41.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:57:41.321: INFO: namespace: e2e-tests-services-n95x7, resource: bindings, ignored listing per whitelist Aug 27 17:57:41.381: INFO: namespace e2e-tests-services-n95x7 deletion completed in 30.555708765s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:57.411 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:57:41.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 27 17:57:42.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 
--namespace=e2e-tests-kubectl-p64nb' Aug 27 17:57:42.358: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 27 17:57:42.358: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Aug 27 17:57:47.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-p64nb' Aug 27 17:57:47.783: INFO: stderr: "" Aug 27 17:57:47.783: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:57:47.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p64nb" for this suite. Aug 27 17:58:02.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:58:02.508: INFO: namespace: e2e-tests-kubectl-p64nb, resource: bindings, ignored listing per whitelist Aug 27 17:58:02.573: INFO: namespace e2e-tests-kubectl-p64nb deletion completed in 14.440510807s • [SLOW TEST:21.192 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:58:02.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-d9e7228a-e88e-11ea-b58c-0242ac11000b STEP: Creating configMap with name cm-test-opt-upd-d9e722f7-e88e-11ea-b58c-0242ac11000b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d9e7228a-e88e-11ea-b58c-0242ac11000b STEP: Updating configmap cm-test-opt-upd-d9e722f7-e88e-11ea-b58c-0242ac11000b STEP: Creating configMap with name cm-test-opt-create-d9e7234a-e88e-11ea-b58c-0242ac11000b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:58:12.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-configmap-kzgcd" for this suite. Aug 27 17:58:37.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:58:37.055: INFO: namespace: e2e-tests-configmap-kzgcd, resource: bindings, ignored listing per whitelist Aug 27 17:58:37.109: INFO: namespace e2e-tests-configmap-kzgcd deletion completed in 24.127651137s • [SLOW TEST:34.535 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:58:37.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 27 17:58:37.208: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 17:58:37.229: INFO: Waiting for terminating namespaces to be deleted... Aug 27 17:58:37.231: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 27 17:58:37.238: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:58:37.238: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 17:58:37.238: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded) Aug 27 17:58:37.238: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 17:58:37.238: INFO: rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:15:14 +0000 UTC (1 container statuses recorded) Aug 27 17:58:37.238: INFO: Container rally-a0035e6c-0q7zegi3 ready: true, restart count 92 Aug 27 17:58:37.238: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 27 17:58:37.278: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:58:37.278: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 17:58:37.278: INFO: rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:14:52 +0000 UTC (1 container statuses recorded) Aug 27 17:58:37.278: INFO: Container rally-a0035e6c-x0kfgasz ready: true, restart count 92 Aug 27 17:58:37.278: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 17:58:37.278: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Aug 27 
17:58:37.451: INFO: Pod rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w requesting resource cpu=0m on Node hunter-worker Aug 27 17:58:37.451: INFO: Pod rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp requesting resource cpu=0m on Node hunter-worker2 Aug 27 17:58:37.451: INFO: Pod kindnet-kvcmt requesting resource cpu=100m on Node hunter-worker Aug 27 17:58:37.451: INFO: Pod kindnet-l4sc5 requesting resource cpu=100m on Node hunter-worker2 Aug 27 17:58:37.451: INFO: Pod kube-proxy-7x47x requesting resource cpu=0m on Node hunter-worker2 Aug 27 17:58:37.451: INFO: Pod kube-proxy-xm64c requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9d74f9-e88e-11ea-b58c-0242ac11000b.162f31c597c41e3a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2x7bc/filler-pod-ee9d74f9-e88e-11ea-b58c-0242ac11000b to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9d74f9-e88e-11ea-b58c-0242ac11000b.162f31c5e802d4a3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9d74f9-e88e-11ea-b58c-0242ac11000b.162f31c6f02808a2], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9d74f9-e88e-11ea-b58c-0242ac11000b.162f31c712b29957], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9e619d-e88e-11ea-b58c-0242ac11000b.162f31c59a682ddb], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2x7bc/filler-pod-ee9e619d-e88e-11ea-b58c-0242ac11000b to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9e619d-e88e-11ea-b58c-0242ac11000b.162f31c6735c7f9f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9e619d-e88e-11ea-b58c-0242ac11000b.162f31c725bb09a7], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee9e619d-e88e-11ea-b58c-0242ac11000b.162f31c736373d1b], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.162f31c78c9454c8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:58:48.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-2x7bc" for this suite. 
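The FailedScheduling event above ("0/3 nodes are available: ... 2 Insufficient cpu") is driven purely by the CPU requests declared on the pods the test creates. As a minimal sketch of how such a request is expressed with the Kubernetes Go API (not the e2e framework's own helper; the function name and the 600m figure are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod builds a pod that requests cpuMilli millicores. The scheduler sums
// these requests per node before reporting "Insufficient cpu" for a pod that
// no longer fits, which is the condition the test above provokes.
func fillerPod(name string, cpuMilli int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1", // same pause image the events above reference
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: *resource.NewMilliQuantity(cpuMilli, resource.DecimalSI),
					},
				},
			}},
		},
	}
}

func main() {
	p := fillerPod("filler-pod", 600)
	fmt.Println(p.Name, "requests", p.Spec.Containers[0].Resources.Requests.Cpu().String(), "CPU")
}
```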
Aug 27 17:59:02.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:59:02.349: INFO: namespace: e2e-tests-sched-pred-2x7bc, resource: bindings, ignored listing per whitelist Aug 27 17:59:02.387: INFO: namespace e2e-tests-sched-pred-2x7bc deletion completed in 13.422224595s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:25.278 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:59:02.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0827 17:59:36.108351 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 27 17:59:36.108: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 17:59:36.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wbl9g" for this suite. 
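The orphaning behaviour verified above is requested through the delete options' propagation policy. A minimal client-go sketch, assuming a recent client-go with context-taking signatures (the client-go contemporary with this run, the v1.13 era, passes `&metav1.DeleteOptions{...}` and no context); the kubeconfig path, namespace and deployment name are placeholders:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Orphan the deployment's dependents (its ReplicaSet) instead of cascading.
	// The test above deletes a deployment this way and then checks that the
	// garbage collector leaves the ReplicaSet alone for at least 30 seconds.
	orphan := metav1.DeletePropagationOrphan
	err = clientset.AppsV1().Deployments("default").Delete(
		context.TODO(),
		"my-deployment", // placeholder name
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```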
Aug 27 17:59:50.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 17:59:50.778: INFO: namespace: e2e-tests-gc-wbl9g, resource: bindings, ignored listing per whitelist Aug 27 17:59:50.785: INFO: namespace e2e-tests-gc-wbl9g deletion completed in 14.675006215s • [SLOW TEST:48.398 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 17:59:50.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 17:59:52.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-zbtmx" to be "success or failure" Aug 27 17:59:52.410: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 396.87407ms Aug 27 17:59:55.063: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.049216419s Aug 27 17:59:57.318: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.304836571s Aug 27 18:00:00.023: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009250952s Aug 27 18:00:02.027: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013328953s Aug 27 18:00:04.029: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.015755428s Aug 27 18:00:06.271: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.257276931s Aug 27 18:00:08.274: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.26027255s STEP: Saw pod success Aug 27 18:00:08.274: INFO: Pod "downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 18:00:08.276: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b container client-container: STEP: delete the pod Aug 27 18:00:08.923: INFO: Waiting for pod downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b to disappear Aug 27 18:00:09.199: INFO: Pod downwardapi-volume-1ae1b329-e88f-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 18:00:09.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zbtmx" for this suite. Aug 27 18:00:17.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 18:00:17.927: INFO: namespace: e2e-tests-projected-zbtmx, resource: bindings, ignored listing per whitelist Aug 27 18:00:17.989: INFO: namespace e2e-tests-projected-zbtmx deletion completed in 8.786263555s • [SLOW TEST:27.204 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 18:00:17.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2adc2c4d-e88f-11ea-b58c-0242ac11000b STEP: Creating a pod to test consume configMaps Aug 27 18:00:18.743: INFO: Waiting up to 5m0s for pod "pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-9d9tk" to be "success or failure" Aug 27 18:00:18.796: INFO: Pod "pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.312833ms Aug 27 18:00:21.180: INFO: Pod "pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43639583s Aug 27 18:00:23.349: INFO: Pod "pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.606110272s Aug 27 18:00:25.373: INFO: Pod "pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.629381323s STEP: Saw pod success Aug 27 18:00:25.373: INFO: Pod "pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b" satisfied condition "success or failure" Aug 27 18:00:25.382: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b container configmap-volume-test: STEP: delete the pod Aug 27 18:00:25.428: INFO: Waiting for pod pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b to disappear Aug 27 18:00:25.454: INFO: Pod pod-configmaps-2add3ab7-e88f-11ea-b58c-0242ac11000b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 18:00:25.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9d9tk" for this suite. Aug 27 18:00:31.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 18:00:31.564: INFO: namespace: e2e-tests-configmap-9d9tk, resource: bindings, ignored listing per whitelist Aug 27 18:00:31.618: INFO: namespace e2e-tests-configmap-9d9tk deletion completed in 6.16062944s • [SLOW TEST:13.628 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 18:00:31.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9ktkh STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 27 18:00:31.789: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 27 18:01:00.607: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.149:8080/dial?request=hostName&protocol=udp&host=10.244.1.148&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-9ktkh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 18:01:00.607: INFO: >>> kubeConfig: /root/.kube/config I0827 18:01:00.630406 6 log.go:172] (0xc000d9f3f0) (0xc000504aa0) Create stream I0827 18:01:00.630430 6 log.go:172] (0xc000d9f3f0) (0xc000504aa0) Stream added, broadcasting: 1 I0827 18:01:00.631834 6 log.go:172] (0xc000d9f3f0) Reply frame received for 1 I0827 18:01:00.631856 6 log.go:172] (0xc000d9f3f0) (0xc000504b40) Create stream I0827 18:01:00.631862 6 log.go:172] (0xc000d9f3f0) (0xc000504b40) Stream added, broadcasting: 3 I0827 18:01:00.632559 6 log.go:172] (0xc000d9f3f0) Reply frame received for 3 
I0827 18:01:00.632586 6 log.go:172] (0xc000d9f3f0) (0xc000504be0) Create stream I0827 18:01:00.632595 6 log.go:172] (0xc000d9f3f0) (0xc000504be0) Stream added, broadcasting: 5 I0827 18:01:00.633469 6 log.go:172] (0xc000d9f3f0) Reply frame received for 5 I0827 18:01:00.688989 6 log.go:172] (0xc000d9f3f0) Data frame received for 3 I0827 18:01:00.689079 6 log.go:172] (0xc000504b40) (3) Data frame handling I0827 18:01:00.689133 6 log.go:172] (0xc000504b40) (3) Data frame sent I0827 18:01:00.689288 6 log.go:172] (0xc000d9f3f0) Data frame received for 3 I0827 18:01:00.689308 6 log.go:172] (0xc000504b40) (3) Data frame handling I0827 18:01:00.689456 6 log.go:172] (0xc000d9f3f0) Data frame received for 5 I0827 18:01:00.689477 6 log.go:172] (0xc000504be0) (5) Data frame handling I0827 18:01:00.690881 6 log.go:172] (0xc000d9f3f0) Data frame received for 1 I0827 18:01:00.690892 6 log.go:172] (0xc000504aa0) (1) Data frame handling I0827 18:01:00.690900 6 log.go:172] (0xc000504aa0) (1) Data frame sent I0827 18:01:00.690909 6 log.go:172] (0xc000d9f3f0) (0xc000504aa0) Stream removed, broadcasting: 1 I0827 18:01:00.690918 6 log.go:172] (0xc000d9f3f0) Go away received I0827 18:01:00.691048 6 log.go:172] (0xc000d9f3f0) (0xc000504aa0) Stream removed, broadcasting: 1 I0827 18:01:00.691080 6 log.go:172] (0xc000d9f3f0) (0xc000504b40) Stream removed, broadcasting: 3 I0827 18:01:00.691102 6 log.go:172] (0xc000d9f3f0) (0xc000504be0) Stream removed, broadcasting: 5 Aug 27 18:01:00.691: INFO: Waiting for endpoints: map[] Aug 27 18:01:00.693: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.149:8080/dial?request=hostName&protocol=udp&host=10.244.2.13&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-9ktkh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 18:01:00.693: INFO: >>> kubeConfig: /root/.kube/config I0827 18:01:00.724024 6 log.go:172] (0xc001cd2580) (0xc000b59680) Create stream I0827 18:01:00.724048 6 log.go:172] (0xc001cd2580) (0xc000b59680) Stream added, broadcasting: 1 I0827 18:01:00.725875 6 log.go:172] (0xc001cd2580) Reply frame received for 1 I0827 18:01:00.725915 6 log.go:172] (0xc001cd2580) (0xc0020ac460) Create stream I0827 18:01:00.725926 6 log.go:172] (0xc001cd2580) (0xc0020ac460) Stream added, broadcasting: 3 I0827 18:01:00.726646 6 log.go:172] (0xc001cd2580) Reply frame received for 3 I0827 18:01:00.726686 6 log.go:172] (0xc001cd2580) (0xc0020ac500) Create stream I0827 18:01:00.726723 6 log.go:172] (0xc001cd2580) (0xc0020ac500) Stream added, broadcasting: 5 I0827 18:01:00.727448 6 log.go:172] (0xc001cd2580) Reply frame received for 5 I0827 18:01:00.795087 6 log.go:172] (0xc001cd2580) Data frame received for 3 I0827 18:01:00.795112 6 log.go:172] (0xc0020ac460) (3) Data frame handling I0827 18:01:00.795126 6 log.go:172] (0xc0020ac460) (3) Data frame sent I0827 18:01:00.795846 6 log.go:172] (0xc001cd2580) Data frame received for 5 I0827 18:01:00.795887 6 log.go:172] (0xc0020ac500) (5) Data frame handling I0827 18:01:00.795916 6 log.go:172] (0xc001cd2580) Data frame received for 3 I0827 18:01:00.795930 6 log.go:172] (0xc0020ac460) (3) Data frame handling I0827 18:01:00.797143 6 log.go:172] (0xc001cd2580) Data frame received for 1 I0827 18:01:00.797163 6 log.go:172] (0xc000b59680) (1) Data frame handling I0827 18:01:00.797181 6 log.go:172] (0xc000b59680) (1) Data frame sent I0827 18:01:00.797190 6 log.go:172] (0xc001cd2580) (0xc000b59680) Stream removed, broadcasting: 1 I0827 
18:01:00.797223 6 log.go:172] (0xc001cd2580) Go away received I0827 18:01:00.797267 6 log.go:172] (0xc001cd2580) (0xc000b59680) Stream removed, broadcasting: 1 I0827 18:01:00.797282 6 log.go:172] (0xc001cd2580) (0xc0020ac460) Stream removed, broadcasting: 3 I0827 18:01:00.797297 6 log.go:172] (0xc001cd2580) (0xc0020ac500) Stream removed, broadcasting: 5 Aug 27 18:01:00.797: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 18:01:00.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9ktkh" for this suite. Aug 27 18:01:31.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 18:01:31.503: INFO: namespace: e2e-tests-pod-network-test-9ktkh, resource: bindings, ignored listing per whitelist Aug 27 18:01:31.520: INFO: namespace e2e-tests-pod-network-test-9ktkh deletion completed in 30.719829478s • [SLOW TEST:59.902 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 18:01:31.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 18:01:43.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-7v2dx" for this suite. Aug 27 18:01:50.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 18:01:50.689: INFO: namespace: e2e-tests-namespaces-7v2dx, resource: bindings, ignored listing per whitelist Aug 27 18:01:50.724: INFO: namespace e2e-tests-namespaces-7v2dx deletion completed in 6.356181848s STEP: Destroying namespace "e2e-tests-nsdeletetest-mtl6c" for this suite. Aug 27 18:01:50.726: INFO: Namespace e2e-tests-nsdeletetest-mtl6c was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-kmc47" for this suite. 
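The namespace test that just finished relies on the namespace controller garbage-collecting every object in a deleted namespace, Services included. A minimal sketch of the same check with client-go, assuming recent context-taking signatures; the kubeconfig path and namespace name are placeholders, and the wait loop is unbounded for brevity:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.TODO()
	ns := "nsdeletetest" // placeholder namespace name

	// Delete the namespace and wait until it is fully gone, as the test does
	// before recreating it.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		log.Fatal(err)
	}
	for {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		time.Sleep(2 * time.Second)
	}

	// With the namespace gone, a Service list scoped to it comes back empty,
	// which is the property the test asserts after recreating the namespace.
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("services remaining in %s: %d\n", ns, len(svcs.Items))
}
```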
Aug 27 18:01:56.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 18:01:57.154: INFO: namespace: e2e-tests-nsdeletetest-kmc47, resource: bindings, ignored listing per whitelist Aug 27 18:01:57.162: INFO: namespace e2e-tests-nsdeletetest-kmc47 deletion completed in 6.435830876s • [SLOW TEST:25.641 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 18:01:57.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 27 18:01:58.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692490,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 27 18:01:58.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692490,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 27 18:02:08.056: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692510,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 27 18:02:08.056: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692510,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 27 18:02:18.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692530,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 27 18:02:18.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692530,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 27 18:02:28.238: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692550,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Aug 27 18:02:28.238: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-a,UID:6609fd0d-e88f-11ea-a485-0242ac120004,ResourceVersion:2692550,Generation:0,CreationTimestamp:2020-08-27 18:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 27 18:02:38.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-b,UID:7e22f476-e88f-11ea-a485-0242ac120004,ResourceVersion:2692570,Generation:0,CreationTimestamp:2020-08-27 18:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 27 18:02:38.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-b,UID:7e22f476-e88f-11ea-a485-0242ac120004,ResourceVersion:2692570,Generation:0,CreationTimestamp:2020-08-27 18:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 27 18:02:48.300: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-b,UID:7e22f476-e88f-11ea-a485-0242ac120004,ResourceVersion:2692590,Generation:0,CreationTimestamp:2020-08-27 18:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 27 18:02:48.300: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hbhm7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbhm7/configmaps/e2e-watch-test-configmap-b,UID:7e22f476-e88f-11ea-a485-0242ac120004,ResourceVersion:2692590,Generation:0,CreationTimestamp:2020-08-27 18:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:02:58.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hbhm7" for this suite.
Aug 27 18:03:04.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:03:04.556: INFO: namespace: e2e-tests-watch-hbhm7, resource: bindings, ignored listing per whitelist
Aug 27 18:03:04.556: INFO: namespace e2e-tests-watch-hbhm7 deletion completed in 6.250010249s

• [SLOW TEST:67.393 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:03:04.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:03:04.905: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
alternatives.log
containers/

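The listing above was fetched through the API server's node proxy subresource with an explicit kubelet port. A minimal client-go sketch of the same GET, assuming a recent client-go in which DoRaw takes a context (older releases, such as the one this suite was built against, use DoRaw() with no argument); the kubeconfig path is a placeholder and the node name is taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// GET /api/v1/nodes/hunter-worker:10250/proxy/logs/ — the node proxy
	// subresource with an explicit kubelet port, matching the URL in the log.
	data, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("hunter-worker:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
```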
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 27 18:03:15.398: INFO: Waiting up to 5m0s for pod "pod-941bb251-e88f-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-xxrpr" to be "success or failure"
Aug 27 18:03:15.411: INFO: Pod "pod-941bb251-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.26021ms
Aug 27 18:03:17.466: INFO: Pod "pod-941bb251-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06800753s
Aug 27 18:03:19.592: INFO: Pod "pod-941bb251-e88f-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194102825s
Aug 27 18:03:21.848: INFO: Pod "pod-941bb251-e88f-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 6.449824497s
Aug 27 18:03:23.938: INFO: Pod "pod-941bb251-e88f-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.539497597s
STEP: Saw pod success
Aug 27 18:03:23.938: INFO: Pod "pod-941bb251-e88f-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:03:23.941: INFO: Trying to get logs from node hunter-worker2 pod pod-941bb251-e88f-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:03:24.178: INFO: Waiting for pod pod-941bb251-e88f-11ea-b58c-0242ac11000b to disappear
Aug 27 18:03:24.682: INFO: Pod pod-941bb251-e88f-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:03:24.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xxrpr" for this suite.
Aug 27 18:03:35.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:03:35.675: INFO: namespace: e2e-tests-emptydir-xxrpr, resource: bindings, ignored listing per whitelist
Aug 27 18:03:35.698: INFO: namespace e2e-tests-emptydir-xxrpr deletion completed in 11.012074855s

• [SLOW TEST:21.460 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
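For readers who want to reproduce the emptyDir (non-root,0644,default) case above outside the suite, the pod it creates has roughly the following shape: an emptyDir volume on the node's default medium, mounted by a container that runs as a non-root UID and writes a file at mode 0644. This is a sketch only; the busybox image, UID 1001, paths and commands are illustrative assumptions, not the framework's actual values.

// Sketch only: a pod comparable to what the emptyDir (non-root,0644,default) spec
// creates. Image, UID and paths are assumptions, not the e2e framework's own values.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource means the node's default medium (disk).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumption; the suite uses its own test image
				// Write a file at mode 0644 into the emptyDir and show its permissions.
				Command: []string{"sh", "-c",
					"echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(1001), // the "non-root" part of the spec name; UID is an assumption
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}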
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:03:35.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Aug 27 18:03:36.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 27 18:03:36.993: INFO: stderr: ""
Aug 27 18:03:36.994: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:03:36.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wv7vt" for this suite.
Aug 27 18:03:43.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:03:43.600: INFO: namespace: e2e-tests-kubectl-wv7vt, resource: bindings, ignored listing per whitelist
Aug 27 18:03:43.600: INFO: namespace e2e-tests-kubectl-wv7vt deletion completed in 6.603631472s

• [SLOW TEST:7.902 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
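The api-versions check above does nothing more than run kubectl api-versions and assert that the core group, the bare "v1" line, is present in the output. A minimal stand-alone version of that check is sketched below; it assumes kubectl is on PATH and a valid kubeconfig is configured, and it is not the e2e framework's own helper.

// Sketch only: the same "is v1 among the advertised API versions?" check,
// done by shelling out to kubectl. Assumes kubectl and a working kubeconfig.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "api-versions").Output()
	if err != nil {
		panic(err)
	}
	found := false
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "v1" { // the core/legacy group shows up as a bare "v1" line
			found = true
			break
		}
	}
	fmt.Println("v1 present:", found)
}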
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:03:43.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 27 18:03:45.695: INFO: Pod name wrapped-volume-race-a651c111-e88f-11ea-b58c-0242ac11000b: Found 0 pods out of 5
Aug 27 18:03:50.702: INFO: Pod name wrapped-volume-race-a651c111-e88f-11ea-b58c-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a651c111-e88f-11ea-b58c-0242ac11000b in namespace e2e-tests-emptydir-wrapper-4kj4b, will wait for the garbage collector to delete the pods
Aug 27 18:06:13.172: INFO: Deleting ReplicationController wrapped-volume-race-a651c111-e88f-11ea-b58c-0242ac11000b took: 6.997583ms
Aug 27 18:06:14.072: INFO: Terminating ReplicationController wrapped-volume-race-a651c111-e88f-11ea-b58c-0242ac11000b pods took: 900.213097ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 18:06:58.306: INFO: Pod name wrapped-volume-race-1920b0c9-e890-11ea-b58c-0242ac11000b: Found 0 pods out of 5
Aug 27 18:07:03.490: INFO: Pod name wrapped-volume-race-1920b0c9-e890-11ea-b58c-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1920b0c9-e890-11ea-b58c-0242ac11000b in namespace e2e-tests-emptydir-wrapper-4kj4b, will wait for the garbage collector to delete the pods
Aug 27 18:08:48.538: INFO: Deleting ReplicationController wrapped-volume-race-1920b0c9-e890-11ea-b58c-0242ac11000b took: 5.811616ms
Aug 27 18:08:49.338: INFO: Terminating ReplicationController wrapped-volume-race-1920b0c9-e890-11ea-b58c-0242ac11000b pods took: 800.202268ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 18:09:31.378: INFO: Pod name wrapped-volume-race-742ee29b-e890-11ea-b58c-0242ac11000b: Found 0 pods out of 5
Aug 27 18:09:36.384: INFO: Pod name wrapped-volume-race-742ee29b-e890-11ea-b58c-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-742ee29b-e890-11ea-b58c-0242ac11000b in namespace e2e-tests-emptydir-wrapper-4kj4b, will wait for the garbage collector to delete the pods
Aug 27 18:11:40.495: INFO: Deleting ReplicationController wrapped-volume-race-742ee29b-e890-11ea-b58c-0242ac11000b took: 34.843355ms
Aug 27 18:11:40.796: INFO: Terminating ReplicationController wrapped-volume-race-742ee29b-e890-11ea-b58c-0242ac11000b pods took: 300.259508ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:12:32.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4kj4b" for this suite.
Aug 27 18:12:42.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:12:42.938: INFO: namespace: e2e-tests-emptydir-wrapper-4kj4b, resource: bindings, ignored listing per whitelist
Aug 27 18:12:42.985: INFO: namespace e2e-tests-emptydir-wrapper-4kj4b deletion completed in 10.2021501s

• [SLOW TEST:539.384 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
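The wrapped-volume-race spec stresses pods that mount many configMap-backed volumes at once, the pattern that used to race inside the kubelet's emptyDir "wrapper". A sketch of that pod shape is below; the configMap names, the count of five (the spec itself creates 50 configmaps and spawns the pods through a ReplicationController), and the busybox image are illustrative assumptions.

// Sketch only: the shape of the pods the wrapped-volume-race RC spawns -- one
// container mounting several configMap-backed volumes at once.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 5; i++ { // the spec uses 50 configmaps; 5 keeps the sketch short
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
			}},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-"},
		Spec: corev1.PodSpec{
			Volumes: volumes,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // assumption
				Command:      []string{"sleep", "3600"},
				VolumeMounts: mounts,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}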
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:12:42.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:12:43.139: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Aug 27 18:12:43.144: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bsdzd/daemonsets","resourceVersion":"2694177"},"items":null}

Aug 27 18:12:43.146: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bsdzd/pods","resourceVersion":"2694177"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:12:43.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bsdzd" for this suite.
Aug 27 18:12:49.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:12:49.275: INFO: namespace: e2e-tests-daemonsets-bsdzd, resource: bindings, ignored listing per whitelist
Aug 27 18:12:49.281: INFO: namespace e2e-tests-daemonsets-bsdzd deletion completed in 6.125726632s

S [SKIPPING] [6.296 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Aug 27 18:12:43.139: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:12:49.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:12:49.395: INFO: Creating ReplicaSet my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b
Aug 27 18:12:49.417: INFO: Pod name my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b: Found 0 pods out of 1
Aug 27 18:12:54.422: INFO: Pod name my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b: Found 1 pods out of 1
Aug 27 18:12:54.422: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b" is running
Aug 27 18:12:54.425: INFO: Pod "my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b-zs9s2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 18:12:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 18:12:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 18:12:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 18:12:49 +0000 UTC Reason: Message:}])
Aug 27 18:12:54.425: INFO: Trying to dial the pod
Aug 27 18:12:59.433: INFO: Controller my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b: Got expected result from replica 1 [my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b-zs9s2]: "my-hostname-basic-ea69bbc0-e890-11ea-b58c-0242ac11000b-zs9s2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:12:59.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-4z9bm" for this suite.
Aug 27 18:13:05.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:13:05.507: INFO: namespace: e2e-tests-replicaset-4z9bm, resource: bindings, ignored listing per whitelist
Aug 27 18:13:05.559: INFO: namespace e2e-tests-replicaset-4z9bm deletion completed in 6.122750897s

• [SLOW TEST:16.278 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
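The ReplicaSet spec above creates one replica of a pod that serves its own hostname over HTTP and then dials it, expecting the response to match the pod name. A comparable object, sketched in Go with an assumed image name, tag and port, follows.

// Sketch only: a one-replica ReplicaSet serving the pod's hostname.
// Image name/tag and port are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		TypeMeta:   metav1.TypeMeta{Kind: "ReplicaSet", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic", Labels: labels},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "my-hostname-basic",
					Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumption
					Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumption
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}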
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:13:05.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 18:13:05.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6m784'
Aug 27 18:13:09.549: INFO: stderr: ""
Aug 27 18:13:09.549: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Aug 27 18:13:09.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6m784'
Aug 27 18:13:18.294: INFO: stderr: ""
Aug 27 18:13:18.294: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:13:18.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6m784" for this suite.
Aug 27 18:13:24.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:13:24.717: INFO: namespace: e2e-tests-kubectl-6m784, resource: bindings, ignored listing per whitelist
Aug 27 18:13:24.722: INFO: namespace e2e-tests-kubectl-6m784 deletion completed in 6.399015586s

• [SLOW TEST:19.162 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:13:24.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 27 18:13:32.432: INFO: 5 pods remaining
Aug 27 18:13:32.432: INFO: 0 pods has nil DeletionTimestamp
Aug 27 18:13:32.432: INFO: 
Aug 27 18:13:34.669: INFO: 0 pods remaining
Aug 27 18:13:34.669: INFO: 0 pods has nil DeletionTimestamp
Aug 27 18:13:34.669: INFO: 
Aug 27 18:13:36.614: INFO: 0 pods remaining
Aug 27 18:13:36.614: INFO: 0 pods has nil DeletionTimestamp
Aug 27 18:13:36.614: INFO: 
STEP: Gathering metrics
W0827 18:13:38.087117       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 18:13:38.087: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:13:38.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wq8cw" for this suite.
Aug 27 18:13:46.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:13:46.396: INFO: namespace: e2e-tests-gc-wq8cw, resource: bindings, ignored listing per whitelist
Aug 27 18:13:46.423: INFO: namespace e2e-tests-gc-wq8cw deletion completed in 8.333090857s

• [SLOW TEST:21.702 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
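The deleteOptions this garbage-collector spec relies on is foreground cascading deletion: the API server adds a foregroundDeletion finalizer to the RC, so the RC object stays visible until the garbage collector has removed all of its pods, which is exactly the "5 pods remaining ... 0 pods remaining" countdown in the log. Only the options themselves are sketched below; how they are passed to a client differs between client-go releases, so no Delete call is shown.

// Sketch only: the delete options that produce foreground cascading deletion.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	// The serialized options carry propagationPolicy: Foreground.
	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out))
}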
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:13:46.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:13:46.500: INFO: Creating deployment "test-recreate-deployment"
Aug 27 18:13:46.519: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 27 18:13:46.538: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Aug 27 18:13:48.551: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 27 18:13:48.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734148826, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734148826, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734148826, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734148826, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 18:13:50.599: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 27 18:13:50.739: INFO: Updating deployment test-recreate-deployment
Aug 27 18:13:50.739: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 18:13:51.604: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-9lb7q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9lb7q/deployments/test-recreate-deployment,UID:0c736672-e891-11ea-a485-0242ac120004,ResourceVersion:2694556,Generation:2,CreationTimestamp:2020-08-27 18:13:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-27 18:13:51 +0000 UTC 2020-08-27 18:13:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-27 18:13:51 +0000 UTC 2020-08-27 18:13:46 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Aug 27 18:13:51.608: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-9lb7q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9lb7q/replicasets/test-recreate-deployment-589c4bfd,UID:0f2026fe-e891-11ea-a485-0242ac120004,ResourceVersion:2694553,Generation:1,CreationTimestamp:2020-08-27 18:13:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0c736672-e891-11ea-a485-0242ac120004 0xc0024599af 0xc0024599c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 18:13:51.609: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 27 18:13:51.609: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-9lb7q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9lb7q/replicasets/test-recreate-deployment-5bf7f65dc,UID:0c78ee94-e891-11ea-a485-0242ac120004,ResourceVersion:2694544,Generation:2,CreationTimestamp:2020-08-27 18:13:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0c736672-e891-11ea-a485-0242ac120004 0xc002459ac0 0xc002459ac1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 18:13:51.655: INFO: Pod "test-recreate-deployment-589c4bfd-6gkx2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-6gkx2,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-9lb7q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9lb7q/pods/test-recreate-deployment-589c4bfd-6gkx2,UID:0f260ddb-e891-11ea-a485-0242ac120004,ResourceVersion:2694557,Generation:0,CreationTimestamp:2020-08-27 18:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 0f2026fe-e891-11ea-a485-0242ac120004 0xc0021cd25f 0xc0021cd270}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pp9qs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pp9qs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pp9qs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021cd2e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021cd300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:13:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:13:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:13:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:13:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 18:13:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:13:51.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9lb7q" for this suite.
Aug 27 18:13:59.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:13:59.809: INFO: namespace: e2e-tests-deployment-9lb7q, resource: bindings, ignored listing per whitelist
Aug 27 18:13:59.854: INFO: namespace e2e-tests-deployment-9lb7q deletion completed in 8.078428087s

• [SLOW TEST:13.431 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
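The Deployment dumped above uses the Recreate strategy, which is the point of this spec: when the pod template changes (the log shows it switching from a redis test image to nginx:1.14-alpine), all old pods are removed before any new ones are created. A trimmed-down sketch of such a Deployment follows; it reuses the names from the log but everything not shown in the dump is an illustrative assumption.

// Sketch only: a Deployment with strategy Recreate, comparable to
// "test-recreate-deployment" in the log.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{Kind: "Deployment", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate (instead of the default RollingUpdate) is what the spec exercises.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}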
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:13:59.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-14736174-e891-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 18:14:00.005: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-45nw6" to be "success or failure"
Aug 27 18:14:00.023: INFO: Pod "pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.387331ms
Aug 27 18:14:02.027: INFO: Pod "pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022016849s
Aug 27 18:14:04.031: INFO: Pod "pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.026076563s
Aug 27 18:14:06.034: INFO: Pod "pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029032881s
STEP: Saw pod success
Aug 27 18:14:06.034: INFO: Pod "pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:14:06.037: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 18:14:06.081: INFO: Waiting for pod pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:14:06.098: INFO: Pod pod-projected-configmaps-1474356c-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:14:06.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-45nw6" for this suite.
Aug 27 18:14:12.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:14:12.142: INFO: namespace: e2e-tests-projected-45nw6, resource: bindings, ignored listing per whitelist
Aug 27 18:14:12.187: INFO: namespace e2e-tests-projected-45nw6 deletion completed in 6.086031536s

• [SLOW TEST:12.333 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
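This projected-configMap spec mounts one configMap through two projected volumes in the same pod and reads the same key from both mount points. A sketch of that pod shape is below; the configMap name, key, image and mount paths are illustrative assumptions.

// Sketch only: one configMap surfaced through two projected volumes in the same pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmSource := corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
			Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
		},
	}
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "projected-configmap-volume-1",
					VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{cmSource}}}},
				{Name: "projected-configmap-volume-2",
					VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{cmSource}}}},
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "cat /etc/projected-1/path/to/data-1 /etc/projected-2/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume-1", MountPath: "/etc/projected-1", ReadOnly: true},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-2", ReadOnly: true},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}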
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:14:12.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1bd3ec9f-e891-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 18:14:12.322: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-kt5cm" to be "success or failure"
Aug 27 18:14:12.410: INFO: Pod "pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 87.450061ms
Aug 27 18:14:14.433: INFO: Pod "pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110865234s
Aug 27 18:14:16.436: INFO: Pod "pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114129319s
STEP: Saw pod success
Aug 27 18:14:16.436: INFO: Pod "pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:14:16.438: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 18:14:16.478: INFO: Waiting for pod pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:14:16.496: INFO: Pod pod-projected-configmaps-1bd63818-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:14:16.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kt5cm" for this suite.
Aug 27 18:14:22.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:14:22.609: INFO: namespace: e2e-tests-projected-kt5cm, resource: bindings, ignored listing per whitelist
Aug 27 18:14:22.623: INFO: namespace e2e-tests-projected-kt5cm deletion completed in 6.121929489s

• [SLOW TEST:10.436 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:14:22.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:14:30.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-snj68" for this suite.
Aug 27 18:14:39.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:14:39.021: INFO: namespace: e2e-tests-kubelet-test-snj68, resource: bindings, ignored listing per whitelist
Aug 27 18:14:39.082: INFO: namespace e2e-tests-kubelet-test-snj68 deletion completed in 8.118604527s

• [SLOW TEST:16.459 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:14:39.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-8smp
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 18:14:40.543: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8smp" in namespace "e2e-tests-subpath-vk54h" to be "success or failure"
Aug 27 18:14:40.575: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Pending", Reason="", readiness=false. Elapsed: 31.887932ms
Aug 27 18:14:42.579: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035120694s
Aug 27 18:14:44.582: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038664688s
Aug 27 18:14:46.685: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141630861s
Aug 27 18:14:48.690: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 8.145983102s
Aug 27 18:14:50.693: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 10.149041472s
Aug 27 18:14:52.697: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 12.153390653s
Aug 27 18:14:54.701: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 14.157888596s
Aug 27 18:14:56.706: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 16.162188146s
Aug 27 18:14:58.709: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 18.165934864s
Aug 27 18:15:00.712: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 20.168891759s
Aug 27 18:15:02.715: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 22.171842454s
Aug 27 18:15:04.719: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 24.175637969s
Aug 27 18:15:06.810: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Running", Reason="", readiness=false. Elapsed: 26.266753351s
Aug 27 18:15:08.814: INFO: Pod "pod-subpath-test-secret-8smp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.27082775s
STEP: Saw pod success
Aug 27 18:15:08.814: INFO: Pod "pod-subpath-test-secret-8smp" satisfied condition "success or failure"
Aug 27 18:15:08.817: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-8smp container test-container-subpath-secret-8smp: 
STEP: delete the pod
Aug 27 18:15:08.902: INFO: Waiting for pod pod-subpath-test-secret-8smp to disappear
Aug 27 18:15:08.914: INFO: Pod pod-subpath-test-secret-8smp no longer exists
STEP: Deleting pod pod-subpath-test-secret-8smp
Aug 27 18:15:08.914: INFO: Deleting pod "pod-subpath-test-secret-8smp" in namespace "e2e-tests-subpath-vk54h"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:15:08.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vk54h" for this suite.
Aug 27 18:15:15.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:15:15.339: INFO: namespace: e2e-tests-subpath-vk54h, resource: bindings, ignored listing per whitelist
Aug 27 18:15:15.342: INFO: namespace e2e-tests-subpath-vk54h deletion completed in 6.420732516s

• [SLOW TEST:36.259 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
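The subpath spec above mounts a secret-backed volume with a subPath, so the container sees a single key of the secret as a regular file at the mount point. A sketch of that pattern follows, with an assumed secret name, key, image and path.

// Sketch only: a secret volume consumed via a subPath mount.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "cat /test/file.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test/file.txt",
					SubPath:   "secret-key", // exposes only this key from the secret volume
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}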
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:15:15.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 27 18:15:15.629: INFO: Waiting up to 5m0s for pod "pod-4190e6c9-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-z486q" to be "success or failure"
Aug 27 18:15:15.757: INFO: Pod "pod-4190e6c9-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 127.970908ms
Aug 27 18:15:17.762: INFO: Pod "pod-4190e6c9-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132731271s
Aug 27 18:15:19.768: INFO: Pod "pod-4190e6c9-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138893211s
Aug 27 18:15:21.852: INFO: Pod "pod-4190e6c9-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223125205s
STEP: Saw pod success
Aug 27 18:15:21.852: INFO: Pod "pod-4190e6c9-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:15:21.855: INFO: Trying to get logs from node hunter-worker pod pod-4190e6c9-e891-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:15:21.899: INFO: Waiting for pod pod-4190e6c9-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:15:21.943: INFO: Pod pod-4190e6c9-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:15:21.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-z486q" for this suite.
Aug 27 18:15:30.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:15:30.313: INFO: namespace: e2e-tests-emptydir-z486q, resource: bindings, ignored listing per whitelist
Aug 27 18:15:30.359: INFO: namespace e2e-tests-emptydir-z486q deletion completed in 8.315027733s

• [SLOW TEST:15.017 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
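The tmpfs variant of the emptyDir tests sets the volume's medium to Memory, which the kubelet backs with a tmpfs mount, and then checks the mount's type and mode. A sketch of that volume definition, with an assumed busybox image and mount path, is below.

// Sketch only: an emptyDir with medium Memory (tmpfs) whose mount the container inspects.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{
					Medium: corev1.StorageMediumMemory, // tmpfs instead of node disk
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // assumption
				Command:      []string{"sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}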
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:15:30.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:15:30.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-c6jh9" to be "success or failure"
Aug 27 18:15:31.030: INFO: Pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 105.426224ms
Aug 27 18:15:33.034: INFO: Pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10926376s
Aug 27 18:15:35.127: INFO: Pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202793286s
Aug 27 18:15:37.130: INFO: Pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205152904s
Aug 27 18:15:39.518: INFO: Pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593165505s
Aug 27 18:15:42.086: INFO: Pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.161584722s
STEP: Saw pod success
Aug 27 18:15:42.086: INFO: Pod "downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:15:42.090: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:15:42.673: INFO: Waiting for pod downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:15:42.741: INFO: Pod downwardapi-volume-4aadc63b-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:15:42.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c6jh9" for this suite.
Aug 27 18:15:49.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:15:49.132: INFO: namespace: e2e-tests-projected-c6jh9, resource: bindings, ignored listing per whitelist
Aug 27 18:15:49.187: INFO: namespace e2e-tests-projected-c6jh9 deletion completed in 6.34483956s

• [SLOW TEST:18.828 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
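The projected downwardAPI spec above exposes the container's own CPU limit as a file through a resourceFieldRef and then reads it back from the volume. The sketch below shows that wiring; the 500m limit, the 1m divisor, the busybox image and the paths are illustrative assumptions.

// Sketch only: a projected downwardAPI volume exposing limits.cpu of its own container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"), // report the limit in millicores
							},
						}}},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}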
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:15:49.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 27 18:15:49.323: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:15:56.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wjz69" for this suite.
Aug 27 18:16:05.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:16:05.059: INFO: namespace: e2e-tests-init-container-wjz69, resource: bindings, ignored listing per whitelist
Aug 27 18:16:05.265: INFO: namespace e2e-tests-init-container-wjz69 deletion completed in 8.256955799s

• [SLOW TEST:16.078 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
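A minimal sketch of the pattern verified here, assuming a deliberately failing init container: with restartPolicy: Never the pod should end up Failed and the app container should never start. Names and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example   # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox              # illustrative image
    command: ["/bin/false"]     # exits non-zero, so initialization fails
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]      # should never be started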
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:16:05.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-5f3f5c14-e891-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:16:05.457: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-kdt5w" to be "success or failure"
Aug 27 18:16:05.589: INFO: Pod "pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 132.296996ms
Aug 27 18:16:07.593: INFO: Pod "pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136123742s
Aug 27 18:16:09.597: INFO: Pod "pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13992792s
Aug 27 18:16:11.744: INFO: Pod "pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287430074s
STEP: Saw pod success
Aug 27 18:16:11.744: INFO: Pod "pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:16:12.086: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 18:16:12.670: INFO: Waiting for pod pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:16:12.948: INFO: Pod pod-projected-secrets-5f416afd-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:16:12.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kdt5w" for this suite.
Aug 27 18:16:21.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:16:21.211: INFO: namespace: e2e-tests-projected-kdt5w, resource: bindings, ignored listing per whitelist
Aug 27 18:16:21.223: INFO: namespace e2e-tests-projected-kdt5w deletion completed in 8.270583719s

• [SLOW TEST:15.958 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
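The pod built for this test has roughly the following shape: a projected secret volume with an explicit defaultMode, mounted into a container that runs as a non-root user with an fsGroup. Secret name, key, UID/GID values and image below are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root user
    fsGroup: 1001          # group applied to the projected files
  containers:
  - name: projected-secret-volume-test
    image: busybox          # illustrative image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440     # file mode applied to the projected keys
      sources:
      - secret:
          name: projected-secret-test    # illustrative secret name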
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:16:21.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Aug 27 18:16:21.320: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 27 18:16:21.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:21.807: INFO: stderr: ""
Aug 27 18:16:21.807: INFO: stdout: "service/redis-slave created\n"
Aug 27 18:16:21.807: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 27 18:16:21.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:22.141: INFO: stderr: ""
Aug 27 18:16:22.141: INFO: stdout: "service/redis-master created\n"
Aug 27 18:16:22.141: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 27 18:16:22.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:22.455: INFO: stderr: ""
Aug 27 18:16:22.455: INFO: stdout: "service/frontend created\n"
Aug 27 18:16:22.456: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 27 18:16:22.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:22.769: INFO: stderr: ""
Aug 27 18:16:22.769: INFO: stdout: "deployment.extensions/frontend created\n"
Aug 27 18:16:22.769: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 27 18:16:22.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:23.070: INFO: stderr: ""
Aug 27 18:16:23.070: INFO: stdout: "deployment.extensions/redis-master created\n"
Aug 27 18:16:23.070: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 27 18:16:23.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:23.437: INFO: stderr: ""
Aug 27 18:16:23.437: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Aug 27 18:16:23.437: INFO: Waiting for all frontend pods to be Running.
Aug 27 18:16:33.488: INFO: Waiting for frontend to serve content.
Aug 27 18:16:33.502: INFO: Trying to add a new entry to the guestbook.
Aug 27 18:16:34.547: INFO: Failed to get response from guestbook. err: , response: 
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-master:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-mas...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Strea in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Aug 27 18:16:39.568: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 27 18:16:39.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:39.745: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 18:16:39.745: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 18:16:39.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:39.944: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 18:16:39.944: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 18:16:39.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:40.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 18:16:40.115: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 18:16:40.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:40.224: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 18:16:40.224: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 18:16:40.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:40.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 18:16:40.335: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 18:16:40.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2mj5n'
Aug 27 18:16:41.101: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 18:16:41.101: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:16:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2mj5n" for this suite.
Aug 27 18:17:24.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:17:24.064: INFO: namespace: e2e-tests-kubectl-2mj5n, resource: bindings, ignored listing per whitelist
Aug 27 18:17:24.109: INFO: namespace e2e-tests-kubectl-2mj5n deletion completed in 42.945033055s

• [SLOW TEST:62.885 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:17:24.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 27 18:17:26.059: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-p2gpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-p2gpc/configmaps/e2e-watch-test-resource-version,UID:8ef8ab82-e891-11ea-a485-0242ac120004,ResourceVersion:2695403,Generation:0,CreationTimestamp:2020-08-27 18:17:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 18:17:26.059: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-p2gpc,SelfLink:/api/v1/namespaces/e2e-tests-watch-p2gpc/configmaps/e2e-watch-test-resource-version,UID:8ef8ab82-e891-11ea-a485-0242ac120004,ResourceVersion:2695404,Generation:0,CreationTimestamp:2020-08-27 18:17:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:17:26.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-p2gpc" for this suite.
Aug 27 18:17:34.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:17:34.727: INFO: namespace: e2e-tests-watch-p2gpc, resource: bindings, ignored listing per whitelist
Aug 27 18:17:34.763: INFO: namespace e2e-tests-watch-p2gpc deletion completed in 8.485138302s

• [SLOW TEST:10.654 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:17:34.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Aug 27 18:17:35.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fshww'
Aug 27 18:17:35.471: INFO: stderr: ""
Aug 27 18:17:35.471: INFO: stdout: "pod/pause created\n"
Aug 27 18:17:35.471: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 27 18:17:35.471: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-fshww" to be "running and ready"
Aug 27 18:17:35.494: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.544939ms
Aug 27 18:17:37.505: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033644798s
Aug 27 18:17:39.734: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262846669s
Aug 27 18:17:42.283: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81240383s
Aug 27 18:17:44.287: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.816413708s
Aug 27 18:17:44.287: INFO: Pod "pause" satisfied condition "running and ready"
Aug 27 18:17:44.287: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 27 18:17:44.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-fshww'
Aug 27 18:17:44.823: INFO: stderr: ""
Aug 27 18:17:44.823: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 27 18:17:44.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fshww'
Aug 27 18:17:45.092: INFO: stderr: ""
Aug 27 18:17:45.092: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 27 18:17:45.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-fshww'
Aug 27 18:17:45.314: INFO: stderr: ""
Aug 27 18:17:45.314: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 27 18:17:45.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fshww'
Aug 27 18:17:45.402: INFO: stderr: ""
Aug 27 18:17:45.402: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Aug 27 18:17:45.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fshww'
Aug 27 18:17:45.717: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 18:17:45.717: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 27 18:17:45.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-fshww'
Aug 27 18:17:45.824: INFO: stderr: "No resources found.\n"
Aug 27 18:17:45.824: INFO: stdout: ""
Aug 27 18:17:45.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-fshww -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 18:17:45.928: INFO: stderr: ""
Aug 27 18:17:45.929: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:17:45.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fshww" for this suite.
Aug 27 18:17:54.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:17:54.195: INFO: namespace: e2e-tests-kubectl-fshww, resource: bindings, ignored listing per whitelist
Aug 27 18:17:54.244: INFO: namespace e2e-tests-kubectl-fshww deletion completed in 8.313001076s

• [SLOW TEST:19.481 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:17:54.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:17:54.371: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-hwgf7" to be "success or failure"
Aug 27 18:17:54.464: INFO: Pod "downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 92.506864ms
Aug 27 18:17:56.469: INFO: Pod "downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097600709s
Aug 27 18:17:58.473: INFO: Pod "downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10170945s
Aug 27 18:18:00.478: INFO: Pod "downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.106132819s
STEP: Saw pod success
Aug 27 18:18:00.478: INFO: Pod "downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:18:00.481: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:18:00.503: INFO: Waiting for pod downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:18:00.508: INFO: Pod downwardapi-volume-a0308fbe-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:18:00.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hwgf7" for this suite.
Aug 27 18:18:06.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:18:06.618: INFO: namespace: e2e-tests-downward-api-hwgf7, resource: bindings, ignored listing per whitelist
Aug 27 18:18:06.653: INFO: namespace e2e-tests-downward-api-hwgf7 deletion completed in 6.142311403s

• [SLOW TEST:12.408 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:18:06.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0827 18:18:47.882724       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 18:18:47.882: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:18:47.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-f2tgj" for this suite.
Aug 27 18:19:04.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:19:04.295: INFO: namespace: e2e-tests-gc-f2tgj, resource: bindings, ignored listing per whitelist
Aug 27 18:19:04.331: INFO: namespace e2e-tests-gc-f2tgj deletion completed in 16.444678079s

• [SLOW TEST:57.678 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:19:04.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-ca5becc4-e891-11ea-b58c-0242ac11000b
STEP: Creating secret with name s-test-opt-upd-ca5bed27-e891-11ea-b58c-0242ac11000b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ca5becc4-e891-11ea-b58c-0242ac11000b
STEP: Updating secret s-test-opt-upd-ca5bed27-e891-11ea-b58c-0242ac11000b
STEP: Creating secret with name s-test-opt-create-ca5bed49-e891-11ea-b58c-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:19:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nrrpm" for this suite.
Aug 27 18:20:03.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:20:03.419: INFO: namespace: e2e-tests-secrets-nrrpm, resource: bindings, ignored listing per whitelist
Aug 27 18:20:03.457: INFO: namespace e2e-tests-secrets-nrrpm deletion completed in 42.154856049s

• [SLOW TEST:59.126 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:20:03.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 27 18:20:03.655: INFO: Waiting up to 5m0s for pod "pod-ed3a32ac-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-j7qct" to be "success or failure"
Aug 27 18:20:03.671: INFO: Pod "pod-ed3a32ac-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.719133ms
Aug 27 18:20:05.675: INFO: Pod "pod-ed3a32ac-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020078961s
Aug 27 18:20:07.679: INFO: Pod "pod-ed3a32ac-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023693324s
Aug 27 18:20:09.683: INFO: Pod "pod-ed3a32ac-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028084952s
STEP: Saw pod success
Aug 27 18:20:09.683: INFO: Pod "pod-ed3a32ac-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:20:09.686: INFO: Trying to get logs from node hunter-worker2 pod pod-ed3a32ac-e891-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:20:09.709: INFO: Waiting for pod pod-ed3a32ac-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:20:09.731: INFO: Pod pod-ed3a32ac-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:20:09.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j7qct" for this suite.
Aug 27 18:20:18.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:20:18.216: INFO: namespace: e2e-tests-emptydir-j7qct, resource: bindings, ignored listing per whitelist
Aug 27 18:20:18.389: INFO: namespace e2e-tests-emptydir-j7qct deletion completed in 8.654841261s

• [SLOW TEST:14.931 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:20:18.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:20:18.746: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/

[the same directory listing was returned for each of the remaining proxy attempts]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:20:25.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-kd26j" to be "success or failure"
Aug 27 18:20:25.247: INFO: Pod "downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.7893ms
Aug 27 18:20:27.251: INFO: Pod "downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031945358s
Aug 27 18:20:29.295: INFO: Pod "downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07578628s
Aug 27 18:20:31.355: INFO: Pod "downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.1359886s
STEP: Saw pod success
Aug 27 18:20:31.356: INFO: Pod "downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:20:31.359: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:20:31.422: INFO: Waiting for pod downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b to disappear
Aug 27 18:20:31.439: INFO: Pod downwardapi-volume-fa1841fd-e891-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:20:31.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kd26j" for this suite.
Aug 27 18:20:37.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:20:37.591: INFO: namespace: e2e-tests-downward-api-kd26j, resource: bindings, ignored listing per whitelist
Aug 27 18:20:37.615: INFO: namespace e2e-tests-downward-api-kd26j deletion completed in 6.171760165s

• [SLOW TEST:12.577 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
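The pod used by this test follows the plain downwardAPI volume pattern, with a single item mapping metadata.name to a file. A minimal illustrative sketch (names and image are not the test's own):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name    # the file contents should equal the pod's own name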
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:20:37.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0197d0ad-e892-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:20:37.807: INFO: Waiting up to 5m0s for pod "pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-jl2lt" to be "success or failure"
Aug 27 18:20:37.822: INFO: Pod "pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.280533ms
Aug 27 18:20:39.865: INFO: Pod "pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058121755s
Aug 27 18:20:41.869: INFO: Pod "pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.062377838s
Aug 27 18:20:43.873: INFO: Pod "pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066729139s
STEP: Saw pod success
Aug 27 18:20:43.873: INFO: Pod "pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:20:43.877: INFO: Trying to get logs from node hunter-worker pod pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b container secret-env-test: 
STEP: delete the pod
Aug 27 18:20:43.902: INFO: Waiting for pod pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b to disappear
Aug 27 18:20:43.924: INFO: Pod pod-secrets-01987b90-e892-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:20:43.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jl2lt" for this suite.
Aug 27 18:20:54.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:20:54.278: INFO: namespace: e2e-tests-secrets-jl2lt, resource: bindings, ignored listing per whitelist
Aug 27 18:20:54.281: INFO: namespace e2e-tests-secrets-jl2lt deletion completed in 10.353069892s

• [SLOW TEST:16.665 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
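The env-var variant tested here corresponds to a pod that injects a secret key through valueFrom.secretKeyRef, roughly as below. Secret name, key and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                # illustrative image
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test       # illustrative secret name
          key: data-1             # illustrative key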
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:20:54.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:20:55.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-6wzqt" to be "success or failure"
Aug 27 18:20:56.230: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 260.440356ms
Aug 27 18:20:58.234: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264465769s
Aug 27 18:21:00.238: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268515478s
Aug 27 18:21:02.242: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.272313647s
Aug 27 18:21:04.499: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52933708s
Aug 27 18:21:06.502: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 10.532598202s
Aug 27 18:21:08.506: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.536245384s
STEP: Saw pod success
Aug 27 18:21:08.506: INFO: Pod "downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:21:08.508: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:21:08.628: INFO: Waiting for pod downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b to disappear
Aug 27 18:21:08.830: INFO: Pod downwardapi-volume-0c11cc66-e892-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:21:08.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6wzqt" for this suite.
Aug 27 18:21:20.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:21:20.927: INFO: namespace: e2e-tests-projected-6wzqt, resource: bindings, ignored listing per whitelist
Aug 27 18:21:20.929: INFO: namespace e2e-tests-projected-6wzqt deletion completed in 12.091056769s

• [SLOW TEST:26.648 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
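This test is the requests.cpu counterpart of the cpu-limit sketch shown earlier; only the downwardAPI item changes, and a divisor can be set to pick the unit of the reported value. An illustrative fragment of the projected volume source:

      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container   # must name the container whose request is exposed
              resource: requests.cpu
              divisor: 1m                        # report the request in millicores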
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:21:20.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4bxgl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 18:21:22.661: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 18:21:58.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.38:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-4bxgl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:21:58.142: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:21:58.169729       6 log.go:172] (0xc001cd20b0) (0xc00037f4a0) Create stream
I0827 18:21:58.169757       6 log.go:172] (0xc001cd20b0) (0xc00037f4a0) Stream added, broadcasting: 1
I0827 18:21:58.171568       6 log.go:172] (0xc001cd20b0) Reply frame received for 1
I0827 18:21:58.171616       6 log.go:172] (0xc001cd20b0) (0xc001c2a5a0) Create stream
I0827 18:21:58.171631       6 log.go:172] (0xc001cd20b0) (0xc001c2a5a0) Stream added, broadcasting: 3
I0827 18:21:58.172386       6 log.go:172] (0xc001cd20b0) Reply frame received for 3
I0827 18:21:58.172419       6 log.go:172] (0xc001cd20b0) (0xc00037f680) Create stream
I0827 18:21:58.172432       6 log.go:172] (0xc001cd20b0) (0xc00037f680) Stream added, broadcasting: 5
I0827 18:21:58.173265       6 log.go:172] (0xc001cd20b0) Reply frame received for 5
I0827 18:21:58.255631       6 log.go:172] (0xc001cd20b0) Data frame received for 3
I0827 18:21:58.255664       6 log.go:172] (0xc001c2a5a0) (3) Data frame handling
I0827 18:21:58.255678       6 log.go:172] (0xc001c2a5a0) (3) Data frame sent
I0827 18:21:58.255690       6 log.go:172] (0xc001cd20b0) Data frame received for 3
I0827 18:21:58.255703       6 log.go:172] (0xc001c2a5a0) (3) Data frame handling
I0827 18:21:58.255740       6 log.go:172] (0xc001cd20b0) Data frame received for 5
I0827 18:21:58.255756       6 log.go:172] (0xc00037f680) (5) Data frame handling
I0827 18:21:58.257069       6 log.go:172] (0xc001cd20b0) Data frame received for 1
I0827 18:21:58.257089       6 log.go:172] (0xc00037f4a0) (1) Data frame handling
I0827 18:21:58.257104       6 log.go:172] (0xc00037f4a0) (1) Data frame sent
I0827 18:21:58.257125       6 log.go:172] (0xc001cd20b0) (0xc00037f4a0) Stream removed, broadcasting: 1
I0827 18:21:58.257144       6 log.go:172] (0xc001cd20b0) Go away received
I0827 18:21:58.257235       6 log.go:172] (0xc001cd20b0) (0xc00037f4a0) Stream removed, broadcasting: 1
I0827 18:21:58.257249       6 log.go:172] (0xc001cd20b0) (0xc001c2a5a0) Stream removed, broadcasting: 3
I0827 18:21:58.257257       6 log.go:172] (0xc001cd20b0) (0xc00037f680) Stream removed, broadcasting: 5
Aug 27 18:21:58.257: INFO: Found all expected endpoints: [netserver-0]
Aug 27 18:21:58.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.186:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-4bxgl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:21:58.277: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:21:58.303364       6 log.go:172] (0xc000d9f810) (0xc001895720) Create stream
I0827 18:21:58.303392       6 log.go:172] (0xc000d9f810) (0xc001895720) Stream added, broadcasting: 1
I0827 18:21:58.305648       6 log.go:172] (0xc000d9f810) Reply frame received for 1
I0827 18:21:58.305685       6 log.go:172] (0xc000d9f810) (0xc00070ab40) Create stream
I0827 18:21:58.305697       6 log.go:172] (0xc000d9f810) (0xc00070ab40) Stream added, broadcasting: 3
I0827 18:21:58.306741       6 log.go:172] (0xc000d9f810) Reply frame received for 3
I0827 18:21:58.306773       6 log.go:172] (0xc000d9f810) (0xc00202f9a0) Create stream
I0827 18:21:58.306785       6 log.go:172] (0xc000d9f810) (0xc00202f9a0) Stream added, broadcasting: 5
I0827 18:21:58.307688       6 log.go:172] (0xc000d9f810) Reply frame received for 5
I0827 18:21:58.371256       6 log.go:172] (0xc000d9f810) Data frame received for 3
I0827 18:21:58.371296       6 log.go:172] (0xc00070ab40) (3) Data frame handling
I0827 18:21:58.371321       6 log.go:172] (0xc00070ab40) (3) Data frame sent
I0827 18:21:58.371406       6 log.go:172] (0xc000d9f810) Data frame received for 5
I0827 18:21:58.371425       6 log.go:172] (0xc000d9f810) Data frame received for 3
I0827 18:21:58.371455       6 log.go:172] (0xc00070ab40) (3) Data frame handling
I0827 18:21:58.371472       6 log.go:172] (0xc00202f9a0) (5) Data frame handling
I0827 18:21:58.373007       6 log.go:172] (0xc000d9f810) Data frame received for 1
I0827 18:21:58.373027       6 log.go:172] (0xc001895720) (1) Data frame handling
I0827 18:21:58.373045       6 log.go:172] (0xc001895720) (1) Data frame sent
I0827 18:21:58.373062       6 log.go:172] (0xc000d9f810) (0xc001895720) Stream removed, broadcasting: 1
I0827 18:21:58.373073       6 log.go:172] (0xc000d9f810) Go away received
I0827 18:21:58.373175       6 log.go:172] (0xc000d9f810) (0xc001895720) Stream removed, broadcasting: 1
I0827 18:21:58.373190       6 log.go:172] (0xc000d9f810) (0xc00070ab40) Stream removed, broadcasting: 3
I0827 18:21:58.373197       6 log.go:172] (0xc000d9f810) (0xc00202f9a0) Stream removed, broadcasting: 5
Aug 27 18:21:58.373: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:21:58.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-4bxgl" for this suite.
Aug 27 18:22:24.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:22:24.617: INFO: namespace: e2e-tests-pod-network-test-4bxgl, resource: bindings, ignored listing per whitelist
Aug 27 18:22:24.663: INFO: namespace e2e-tests-pod-network-test-4bxgl deletion completed in 26.28579861s

• [SLOW TEST:63.733 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
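The framework drives this check by exec'ing curl against each netserver pod's :8080/hostName endpoint from a helper pod (PodName:host-test-container-pod, ContainerName:hostexec in the ExecWithOptions lines above). A rough, assumed shape for such a node-side helper pod is sketched below; the hostNetwork setting and image are assumptions, not taken from the run.

apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true            # assumption: the helper runs in the node's network namespace
  containers:
  - name: hostexec
    image: busybox             # illustrative; the test framework uses its own hostexec image
    command: ["sleep", "3600"] # keep the pod around so curl can be exec'd inside it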
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:22:24.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jnsw5
Aug 27 18:22:31.038: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jnsw5
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 18:22:31.041: INFO: Initial restart count of pod liveness-http is 0
Aug 27 18:22:56.907: INFO: Restart count of pod e2e-tests-container-probe-jnsw5/liveness-http is now 1 (25.865775329s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:22:57.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jnsw5" for this suite.
Aug 27 18:23:08.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:23:08.178: INFO: namespace: e2e-tests-container-probe-jnsw5, resource: bindings, ignored listing per whitelist
Aug 27 18:23:08.227: INFO: namespace e2e-tests-container-probe-jnsw5 deletion completed in 10.801821252s

• [SLOW TEST:43.564 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
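
The test above creates a pod whose HTTP liveness probe on /healthz fails and verifies the kubelet restarts it. A minimal sketch of a pod with that kind of probe, using nginx (which has no /healthz handler, so the probe gets a 404 and fails) as a stand-in for the dedicated liveness image the suite actually uses; probe timings are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                    # same name as the test pod; the spec itself is illustrative
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine   # stand-in image; any server whose /healthz fails behaves the same
    livenessProbe:
      httpGet:
        path: /healthz                   # nginx answers 404 here, which the kubelet counts as a probe failure
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1                # restart after the first failed probe
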
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:23:08.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:23:09.276: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 27 18:23:14.401: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 18:23:14.401: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 18:23:14.907: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-hqx9h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hqx9h/deployments/test-cleanup-deployment,UID:5ef2c9bc-e892-11ea-a485-0242ac120004,ResourceVersion:2696537,Generation:1,CreationTimestamp:2020-08-27 18:23:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Aug 27 18:23:14.939: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Aug 27 18:23:14.940: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 27 18:23:14.940: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-hqx9h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hqx9h/replicasets/test-cleanup-controller,UID:5bb2054d-e892-11ea-a485-0242ac120004,ResourceVersion:2696538,Generation:1,CreationTimestamp:2020-08-27 18:23:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 5ef2c9bc-e892-11ea-a485-0242ac120004 0xc0010d1697 0xc0010d1698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 27 18:23:15.455: INFO: Pod "test-cleanup-controller-2sbcl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-2sbcl,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-hqx9h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hqx9h/pods/test-cleanup-controller-2sbcl,UID:5be4fa95-e892-11ea-a485-0242ac120004,ResourceVersion:2696532,Generation:0,CreationTimestamp:2020-08-27 18:23:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 5bb2054d-e892-11ea-a485-0242ac120004 0xc0012d0757 0xc0012d0758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnsw9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnsw9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-fnsw9 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012d08a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012d08c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:23:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:23:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:23:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 18:23:09 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.188,StartTime:2020-08-27 18:23:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 18:23:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4853c351e1f3fe770bf218a898a7f9b328fdf1b3a1ba4d59a0168bf0d4a48cf2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:23:15.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hqx9h" for this suite.
Aug 27 18:23:26.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:23:26.219: INFO: namespace: e2e-tests-deployment-hqx9h, resource: bindings, ignored listing per whitelist
Aug 27 18:23:26.257: INFO: namespace e2e-tests-deployment-hqx9h deletion completed in 10.395460008s

• [SLOW TEST:18.030 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
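
The Deployment printed above has RevisionHistoryLimit:*0, which is why the pre-existing test-cleanup-controller ReplicaSet becomes eligible for deletion once the Deployment adopts it. A hedged reconstruction of that Deployment as a manifest (name, labels, image, and replica count are taken from the dump; everything else is a plausible minimum):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0                # keep no old ReplicaSets around once they are scaled down
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
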
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:23:26.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Aug 27 18:23:27.883: INFO: Waiting up to 5m0s for pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz" in namespace "e2e-tests-svcaccounts-dgncr" to be "success or failure"
Aug 27 18:23:27.970: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 87.626053ms
Aug 27 18:23:30.093: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209907183s
Aug 27 18:23:32.386: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503794941s
Aug 27 18:23:34.389: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.506357529s
Aug 27 18:23:37.196: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.31320048s
Aug 27 18:23:39.326: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 11.443097645s
Aug 27 18:23:41.330: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.44756824s
Aug 27 18:23:43.334: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Pending", Reason="", readiness=false. Elapsed: 15.451405618s
Aug 27 18:23:45.338: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.455709178s
STEP: Saw pod success
Aug 27 18:23:45.338: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz" satisfied condition "success or failure"
Aug 27 18:23:45.341: INFO: Trying to get logs from node hunter-worker pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz container token-test: 
STEP: delete the pod
Aug 27 18:23:45.700: INFO: Waiting for pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz to disappear
Aug 27 18:23:45.793: INFO: Pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-fl6kz no longer exists
STEP: Creating a pod to test consume service account root CA
Aug 27 18:23:45.797: INFO: Waiting up to 5m0s for pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd" in namespace "e2e-tests-svcaccounts-dgncr" to be "success or failure"
Aug 27 18:23:45.972: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Pending", Reason="", readiness=false. Elapsed: 175.181894ms
Aug 27 18:23:47.991: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19391557s
Aug 27 18:23:49.994: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197153769s
Aug 27 18:23:52.014: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216856037s
Aug 27 18:23:54.146: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.349171502s
Aug 27 18:23:56.150: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.353032361s
Aug 27 18:23:58.799: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Running", Reason="", readiness=false. Elapsed: 13.001900695s
Aug 27 18:24:00.803: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.005850209s
STEP: Saw pod success
Aug 27 18:24:00.803: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd" satisfied condition "success or failure"
Aug 27 18:24:00.806: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd container root-ca-test: 
STEP: delete the pod
Aug 27 18:24:01.222: INFO: Waiting for pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd to disappear
Aug 27 18:24:02.309: INFO: Pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-86wqd no longer exists
STEP: Creating a pod to test consume service account namespace
Aug 27 18:24:02.579: INFO: Waiting up to 5m0s for pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln" in namespace "e2e-tests-svcaccounts-dgncr" to be "success or failure"
Aug 27 18:24:03.368: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln": Phase="Pending", Reason="", readiness=false. Elapsed: 789.242731ms
Aug 27 18:24:05.744: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164909233s
Aug 27 18:24:07.757: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln": Phase="Pending", Reason="", readiness=false. Elapsed: 5.178189766s
Aug 27 18:24:09.761: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln": Phase="Pending", Reason="", readiness=false. Elapsed: 7.182182018s
Aug 27 18:24:11.765: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln": Phase="Pending", Reason="", readiness=false. Elapsed: 9.185706129s
Aug 27 18:24:13.768: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln": Phase="Running", Reason="", readiness=false. Elapsed: 11.189224906s
Aug 27 18:24:15.773: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.193710518s
STEP: Saw pod success
Aug 27 18:24:15.773: INFO: Pod "pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln" satisfied condition "success or failure"
Aug 27 18:24:15.776: INFO: Trying to get logs from node hunter-worker pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln container namespace-test: 
STEP: delete the pod
Aug 27 18:24:15.805: INFO: Waiting for pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln to disappear
Aug 27 18:24:15.827: INFO: Pod pod-service-account-66fa33f4-e892-11ea-b58c-0242ac11000b-prsln no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:24:15.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-dgncr" for this suite.
Aug 27 18:24:21.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:24:21.930: INFO: namespace: e2e-tests-svcaccounts-dgncr, resource: bindings, ignored listing per whitelist
Aug 27 18:24:21.941: INFO: namespace e2e-tests-svcaccounts-dgncr deletion completed in 6.109548303s

• [SLOW TEST:55.684 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
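
Each of the three pods in this test reads one file from the automatically mounted service-account volume (token, ca.crt, namespace) and exits, which is why they all end in Succeeded. An illustrative pod for the token case (pod name and image are assumptions; the mount path matches the serviceaccount mount shown in the pod dumps earlier in this log):

apiVersion: v1
kind: Pod
metadata:
  name: svcaccount-token-check           # hypothetical name; the test generates unique names
spec:
  restartPolicy: Never
  containers:
  - name: token-test                     # container name as in the log
    image: busybox:1.29                  # assumed utility image
    command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token"]
    # the same directory also holds ca.crt and namespace, which the root-ca-test and
    # namespace-test containers read in the other two pods of this test
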
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:24:21.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 27 18:24:26.732: INFO: Successfully updated pod "labelsupdate87428c7f-e892-11ea-b58c-0242ac11000b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:24:28.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v8l9s" for this suite.
Aug 27 18:24:57.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:24:57.553: INFO: namespace: e2e-tests-projected-v8l9s, resource: bindings, ignored listing per whitelist
Aug 27 18:24:57.578: INFO: namespace e2e-tests-projected-v8l9s deletion completed in 28.346904898s

• [SLOW TEST:35.637 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
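
This test exposes the pod's own labels through a projected downwardAPI volume, patches the labels ("Successfully updated pod" above), and waits for the mounted file to reflect the change. A minimal sketch of such a pod (pod name, image, label value, and the polling command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    example-label: before-update         # the test later patches this and expects /etc/podinfo/labels to change
spec:
  containers:
  - name: client-container
    image: busybox:1.29                  # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
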
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:24:57.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-z978w
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Aug 27 18:24:57.942: INFO: Found 0 stateful pods, waiting for 3
Aug 27 18:25:08.006: INFO: Found 1 stateful pods, waiting for 3
Aug 27 18:25:17.949: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:25:17.949: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:25:17.949: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 18:25:28.368: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:25:28.368: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:25:28.368: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 27 18:25:29.246: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 27 18:25:40.546: INFO: Updating stateful set ss2
Aug 27 18:25:40.833: INFO: Waiting for Pod e2e-tests-statefulset-z978w/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 27 18:25:52.580: INFO: Found 2 stateful pods, waiting for 3
Aug 27 18:26:02.610: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:26:02.610: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:26:02.610: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 18:26:12.621: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:26:12.621: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:26:12.621: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Aug 27 18:26:22.585: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:26:22.585: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:26:22.585: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 27 18:26:22.607: INFO: Updating stateful set ss2
Aug 27 18:26:22.812: INFO: Waiting for Pod e2e-tests-statefulset-z978w/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 18:26:32.839: INFO: Updating stateful set ss2
Aug 27 18:26:32.855: INFO: Waiting for StatefulSet e2e-tests-statefulset-z978w/ss2 to complete update
Aug 27 18:26:32.855: INFO: Waiting for Pod e2e-tests-statefulset-z978w/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 18:26:42.862: INFO: Waiting for StatefulSet e2e-tests-statefulset-z978w/ss2 to complete update
Aug 27 18:26:42.862: INFO: Waiting for Pod e2e-tests-statefulset-z978w/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 27 18:26:52.862: INFO: Deleting all statefulset in ns e2e-tests-statefulset-z978w
Aug 27 18:26:52.864: INFO: Scaling statefulset ss2 to 0
Aug 27 18:27:23.014: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 18:27:23.017: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:27:23.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-z978w" for this suite.
Aug 27 18:27:31.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:27:33.089: INFO: namespace: e2e-tests-statefulset-z978w, resource: bindings, ignored listing per whitelist
Aug 27 18:27:33.098: INFO: namespace e2e-tests-statefulset-z978w deletion completed in 9.756876465s

• [SLOW TEST:155.520 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
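
The canary and phased rolling updates above are controlled by the StatefulSet's RollingUpdate partition: only pods with an ordinal >= partition receive the new revision, so a partition of 2 updates just ss2-2 (the canary), and lowering it afterwards rolls the change out in phases. A hedged sketch of the StatefulSet shape involved (selector labels and ports are assumptions; the service name, replica count, and images come from the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                      # governing headless service created in the BeforeEach step above
  replicas: 3
  selector:
    matchLabels:
      app: ss2-example                   # illustrative labels
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                       # canary phase: only ordinals >= 2 move to the new revision
  template:
    metadata:
      labels:
        app: ss2-example
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # the updated image from the log
        ports:
        - containerPort: 80
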
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:27:33.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-dd64
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 18:27:33.698: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dd64" in namespace "e2e-tests-subpath-gl465" to be "success or failure"
Aug 27 18:27:34.101: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 402.664478ms
Aug 27 18:27:36.104: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406206255s
Aug 27 18:27:38.108: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409473169s
Aug 27 18:27:40.250: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55136814s
Aug 27 18:27:42.406: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.707645306s
Aug 27 18:27:44.410: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 10.711363979s
Aug 27 18:27:46.556: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 12.857311909s
Aug 27 18:27:48.560: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 14.861609968s
Aug 27 18:27:50.564: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Pending", Reason="", readiness=false. Elapsed: 16.865860069s
Aug 27 18:27:52.569: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 18.870402798s
Aug 27 18:27:54.777: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 21.078269434s
Aug 27 18:27:56.780: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 23.081615368s
Aug 27 18:27:58.784: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 25.085399002s
Aug 27 18:28:00.788: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 27.089525357s
Aug 27 18:28:02.792: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 29.0936294s
Aug 27 18:28:04.796: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 31.097912811s
Aug 27 18:28:06.800: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Running", Reason="", readiness=false. Elapsed: 33.101563729s
Aug 27 18:28:08.804: INFO: Pod "pod-subpath-test-projected-dd64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.105288248s
STEP: Saw pod success
Aug 27 18:28:08.804: INFO: Pod "pod-subpath-test-projected-dd64" satisfied condition "success or failure"
Aug 27 18:28:08.807: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-dd64 container test-container-subpath-projected-dd64: 
STEP: delete the pod
Aug 27 18:28:08.843: INFO: Waiting for pod pod-subpath-test-projected-dd64 to disappear
Aug 27 18:28:08.857: INFO: Pod pod-subpath-test-projected-dd64 no longer exists
STEP: Deleting pod pod-subpath-test-projected-dd64
Aug 27 18:28:08.857: INFO: Deleting pod "pod-subpath-test-projected-dd64" in namespace "e2e-tests-subpath-gl465"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:28:08.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gl465" for this suite.
Aug 27 18:28:14.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:28:15.004: INFO: namespace: e2e-tests-subpath-gl465, resource: bindings, ignored listing per whitelist
Aug 27 18:28:15.009: INFO: namespace e2e-tests-subpath-gl465 deletion completed in 6.147274095s

• [SLOW TEST:41.911 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
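
Atomic-writer subpath tests mount a single file out of a projected volume with volumeMounts[].subPath instead of mounting the whole directory. An illustrative pod doing that (the ConfigMap source, key and file names, and image are assumptions; the real test projects the data it prepared in its "Setting up data" step):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example   # hypothetical; the test pod gets a random suffix
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-projected
    image: busybox:1.29                      # assumed image
    command: ["sh", "-c", "cat /test-volume/projected-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume/projected-file
      subPath: projected-file                # mount only this file from the projected volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: example-config               # hypothetical ConfigMap providing the projected key
          items:
          - key: data
            path: projected-file
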
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:28:15.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 27 18:28:23.483: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:23.510: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:25.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:25.514: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:27.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:27.513: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:29.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:29.515: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:31.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:31.514: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:33.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:33.513: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:35.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:35.513: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:37.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:37.639: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 18:28:39.510: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 18:28:39.597: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:28:39.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hbdhd" for this suite.
Aug 27 18:29:03.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:29:03.685: INFO: namespace: e2e-tests-container-lifecycle-hook-hbdhd, resource: bindings, ignored listing per whitelist
Aug 27 18:29:03.705: INFO: namespace e2e-tests-container-lifecycle-hook-hbdhd deletion completed in 24.085489644s

• [SLOW TEST:48.696 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
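
The pod deleted above carries a preStop HTTP hook; the kubelet issues the request before terminating the container, and the test then asks the handler pod (created in the BeforeEach step) whether it saw the call. A minimal sketch of a pod with such a hook, with the image, path, and port as assumptions and the hook pointed at the container's own port rather than at a separate handler pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook       # name as in the log; the spec is illustrative
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /                        # the real test points this at its separate handler pod instead
          port: 80
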
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:29:03.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Aug 27 18:29:03.965: INFO: Waiting up to 5m0s for pod "client-containers-2f46940a-e893-11ea-b58c-0242ac11000b" in namespace "e2e-tests-containers-dvvs7" to be "success or failure"
Aug 27 18:29:03.973: INFO: Pod "client-containers-2f46940a-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.890983ms
Aug 27 18:29:05.977: INFO: Pod "client-containers-2f46940a-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012242295s
Aug 27 18:29:07.981: INFO: Pod "client-containers-2f46940a-e893-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.016300081s
Aug 27 18:29:09.986: INFO: Pod "client-containers-2f46940a-e893-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0209922s
STEP: Saw pod success
Aug 27 18:29:09.986: INFO: Pod "client-containers-2f46940a-e893-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:29:09.990: INFO: Trying to get logs from node hunter-worker2 pod client-containers-2f46940a-e893-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:29:10.032: INFO: Waiting for pod client-containers-2f46940a-e893-11ea-b58c-0242ac11000b to disappear
Aug 27 18:29:10.045: INFO: Pod client-containers-2f46940a-e893-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:29:10.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dvvs7" for this suite.
Aug 27 18:29:16.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:29:16.131: INFO: namespace: e2e-tests-containers-dvvs7, resource: bindings, ignored listing per whitelist
Aug 27 18:29:16.162: INFO: namespace e2e-tests-containers-dvvs7 deletion completed in 6.114510787s

• [SLOW TEST:12.456 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
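
Overriding the image's default command corresponds to the container spec's command field, which replaces the image's ENTRYPOINT (args would replace CMD). An illustrative pod (name, image, and the echoed string are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29                  # assumed image
    # command overrides the image ENTRYPOINT; args (not set here) would override CMD
    command: ["/bin/echo", "entrypoint overridden"]
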
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:29:16.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-lp7c4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 18:29:16.299: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 18:29:48.682: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostName&protocol=http&host=10.244.2.47&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-lp7c4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:29:48.682: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:29:48.716303       6 log.go:172] (0xc0010d8000) (0xc001c2bf40) Create stream
I0827 18:29:48.716327       6 log.go:172] (0xc0010d8000) (0xc001c2bf40) Stream added, broadcasting: 1
I0827 18:29:48.718432       6 log.go:172] (0xc0010d8000) Reply frame received for 1
I0827 18:29:48.718480       6 log.go:172] (0xc0010d8000) (0xc001e3b180) Create stream
I0827 18:29:48.718495       6 log.go:172] (0xc0010d8000) (0xc001e3b180) Stream added, broadcasting: 3
I0827 18:29:48.719772       6 log.go:172] (0xc0010d8000) Reply frame received for 3
I0827 18:29:48.719839       6 log.go:172] (0xc0010d8000) (0xc000b59900) Create stream
I0827 18:29:48.719868       6 log.go:172] (0xc0010d8000) (0xc000b59900) Stream added, broadcasting: 5
I0827 18:29:48.721031       6 log.go:172] (0xc0010d8000) Reply frame received for 5
I0827 18:29:48.792937       6 log.go:172] (0xc0010d8000) Data frame received for 3
I0827 18:29:48.792974       6 log.go:172] (0xc001e3b180) (3) Data frame handling
I0827 18:29:48.793008       6 log.go:172] (0xc001e3b180) (3) Data frame sent
I0827 18:29:48.793550       6 log.go:172] (0xc0010d8000) Data frame received for 5
I0827 18:29:48.793572       6 log.go:172] (0xc000b59900) (5) Data frame handling
I0827 18:29:48.793750       6 log.go:172] (0xc0010d8000) Data frame received for 3
I0827 18:29:48.793762       6 log.go:172] (0xc001e3b180) (3) Data frame handling
I0827 18:29:48.795396       6 log.go:172] (0xc0010d8000) Data frame received for 1
I0827 18:29:48.795413       6 log.go:172] (0xc001c2bf40) (1) Data frame handling
I0827 18:29:48.795436       6 log.go:172] (0xc001c2bf40) (1) Data frame sent
I0827 18:29:48.795465       6 log.go:172] (0xc0010d8000) (0xc001c2bf40) Stream removed, broadcasting: 1
I0827 18:29:48.795533       6 log.go:172] (0xc0010d8000) Go away received
I0827 18:29:48.795627       6 log.go:172] (0xc0010d8000) (0xc001c2bf40) Stream removed, broadcasting: 1
I0827 18:29:48.795661       6 log.go:172] (0xc0010d8000) (0xc001e3b180) Stream removed, broadcasting: 3
I0827 18:29:48.795674       6 log.go:172] (0xc0010d8000) (0xc000b59900) Stream removed, broadcasting: 5
Aug 27 18:29:48.795: INFO: Waiting for endpoints: map[]
Aug 27 18:29:48.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostName&protocol=http&host=10.244.1.199&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-lp7c4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:29:48.798: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:29:48.821817       6 log.go:172] (0xc0010d84d0) (0xc001aaa280) Create stream
I0827 18:29:48.821849       6 log.go:172] (0xc0010d84d0) (0xc001aaa280) Stream added, broadcasting: 1
I0827 18:29:48.823664       6 log.go:172] (0xc0010d84d0) Reply frame received for 1
I0827 18:29:48.823713       6 log.go:172] (0xc0010d84d0) (0xc000b59b80) Create stream
I0827 18:29:48.823729       6 log.go:172] (0xc0010d84d0) (0xc000b59b80) Stream added, broadcasting: 3
I0827 18:29:48.825054       6 log.go:172] (0xc0010d84d0) Reply frame received for 3
I0827 18:29:48.825097       6 log.go:172] (0xc0010d84d0) (0xc000b59c20) Create stream
I0827 18:29:48.825112       6 log.go:172] (0xc0010d84d0) (0xc000b59c20) Stream added, broadcasting: 5
I0827 18:29:48.826213       6 log.go:172] (0xc0010d84d0) Reply frame received for 5
I0827 18:29:48.884358       6 log.go:172] (0xc0010d84d0) Data frame received for 3
I0827 18:29:48.884385       6 log.go:172] (0xc000b59b80) (3) Data frame handling
I0827 18:29:48.884399       6 log.go:172] (0xc000b59b80) (3) Data frame sent
I0827 18:29:48.884639       6 log.go:172] (0xc0010d84d0) Data frame received for 3
I0827 18:29:48.884658       6 log.go:172] (0xc000b59b80) (3) Data frame handling
I0827 18:29:48.884675       6 log.go:172] (0xc0010d84d0) Data frame received for 5
I0827 18:29:48.884683       6 log.go:172] (0xc000b59c20) (5) Data frame handling
I0827 18:29:48.886047       6 log.go:172] (0xc0010d84d0) Data frame received for 1
I0827 18:29:48.886078       6 log.go:172] (0xc001aaa280) (1) Data frame handling
I0827 18:29:48.886095       6 log.go:172] (0xc001aaa280) (1) Data frame sent
I0827 18:29:48.886114       6 log.go:172] (0xc0010d84d0) (0xc001aaa280) Stream removed, broadcasting: 1
I0827 18:29:48.886134       6 log.go:172] (0xc0010d84d0) Go away received
I0827 18:29:48.886235       6 log.go:172] (0xc0010d84d0) (0xc001aaa280) Stream removed, broadcasting: 1
I0827 18:29:48.886261       6 log.go:172] (0xc0010d84d0) (0xc000b59b80) Stream removed, broadcasting: 3
I0827 18:29:48.886267       6 log.go:172] (0xc0010d84d0) (0xc000b59c20) Stream removed, broadcasting: 5
Aug 27 18:29:48.886: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:29:48.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-lp7c4" for this suite.
Aug 27 18:30:16.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:30:17.019: INFO: namespace: e2e-tests-pod-network-test-lp7c4, resource: bindings, ignored listing per whitelist
Aug 27 18:30:17.036: INFO: namespace e2e-tests-pod-network-test-lp7c4 deletion completed in 28.146341962s

• [SLOW TEST:60.874 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
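
The ExecWithOptions lines above show how the check works: the framework execs curl inside its host-test-container-pod against the test container's /dial endpoint, which in turn dials the target pod and reports which hosts answered; the emptied "Waiting for endpoints: map[]" indicates nothing was left to reach. An illustrative standalone pod issuing the same request (the image and the use of wget instead of curl are assumptions; the URL is copied from the log):

apiVersion: v1
kind: Pod
metadata:
  name: dial-check-example               # hypothetical helper pod
spec:
  restartPolicy: Never
  containers:
  - name: hostexec
    image: busybox:1.29                  # assumed image; busybox ships wget rather than curl
    command:
    - "sh"
    - "-c"
    - "wget -q -O- 'http://10.244.1.200:8080/dial?request=hostName&protocol=http&host=10.244.2.47&port=8080&tries=1'"
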
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:30:17.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-5b102011-e893-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:30:17.459: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-vs6mc" to be "success or failure"
Aug 27 18:30:17.533: INFO: Pod "pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 73.720894ms
Aug 27 18:30:19.537: INFO: Pod "pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077539646s
Aug 27 18:30:22.137: INFO: Pod "pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.677892286s
Aug 27 18:30:24.879: INFO: Pod "pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 7.419739343s
Aug 27 18:30:27.269: INFO: Pod "pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.809310794s
STEP: Saw pod success
Aug 27 18:30:27.269: INFO: Pod "pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:30:27.271: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 18:30:28.235: INFO: Waiting for pod pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b to disappear
Aug 27 18:30:28.820: INFO: Pod pod-projected-secrets-5b165932-e893-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:30:28.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vs6mc" for this suite.
Aug 27 18:30:34.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:30:35.039: INFO: namespace: e2e-tests-projected-vs6mc, resource: bindings, ignored listing per whitelist
Aug 27 18:30:35.052: INFO: namespace e2e-tests-projected-vs6mc deletion completed in 6.227752063s

• [SLOW TEST:18.015 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
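
This test projects a Secret into the pod through a projected volume and has the container print the mounted key, which is why the pod runs to Succeeded. A minimal sketch (Secret name, key, pod name, and image are assumptions; the container name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29                  # assumed image
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]   # data-1 is a hypothetical key name
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-test    # hypothetical Secret; the test creates one with a random suffix
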
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:30:35.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-95gp8
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-95gp8
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-95gp8
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-95gp8
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-95gp8
Aug 27 18:30:41.330: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-95gp8, name: ss-0, uid: 687bd48b-e893-11ea-a485-0242ac120004, status phase: Pending. Waiting for statefulset controller to delete.
Aug 27 18:30:48.083: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-95gp8, name: ss-0, uid: 687bd48b-e893-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete.
Aug 27 18:30:48.873: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-95gp8, name: ss-0, uid: 687bd48b-e893-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete.
Aug 27 18:30:49.180: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-95gp8
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-95gp8
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-95gp8 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 27 18:30:56.006: INFO: Deleting all statefulset in ns e2e-tests-statefulset-95gp8
Aug 27 18:30:56.009: INFO: Scaling statefulset ss to 0
Aug 27 18:31:16.193: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 18:31:16.196: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:31:16.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-95gp8" for this suite.
Aug 27 18:31:22.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:31:22.346: INFO: namespace: e2e-tests-statefulset-95gp8, resource: bindings, ignored listing per whitelist
Aug 27 18:31:22.374: INFO: namespace e2e-tests-statefulset-95gp8 deletion completed in 6.123955495s

• [SLOW TEST:47.322 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
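
The recreation scenario above is driven by a host-port conflict: a plain pod is pinned to the chosen node and claims a hostPort, the StatefulSet's ss-0 requests the same hostPort and keeps failing, and once the conflicting pod is removed ss-0 is recreated and reaches Running. A hedged sketch of the conflicting pod (the hostPort value and image are assumptions; the node name appears elsewhere in this log):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod                         # name as in the log
spec:
  nodeName: hunter-worker                # pin to the same node the StatefulSet pod is scheduled to
  containers:
  - name: webserver
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
      hostPort: 21017                    # illustrative port; what matters is that ss-0 asks for the same one
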
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:31:22.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 18:31:22.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-dhzxv'
Aug 27 18:31:24.990: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 18:31:24.990: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 27 18:31:25.064: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qq9rt]
Aug 27 18:31:25.065: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qq9rt" in namespace "e2e-tests-kubectl-dhzxv" to be "running and ready"
Aug 27 18:31:25.107: INFO: Pod "e2e-test-nginx-rc-qq9rt": Phase="Pending", Reason="", readiness=false. Elapsed: 42.093356ms
Aug 27 18:31:27.724: INFO: Pod "e2e-test-nginx-rc-qq9rt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.659104464s
Aug 27 18:31:29.727: INFO: Pod "e2e-test-nginx-rc-qq9rt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662901319s
Aug 27 18:31:31.731: INFO: Pod "e2e-test-nginx-rc-qq9rt": Phase="Running", Reason="", readiness=true. Elapsed: 6.666652501s
Aug 27 18:31:31.731: INFO: Pod "e2e-test-nginx-rc-qq9rt" satisfied condition "running and ready"
Aug 27 18:31:31.731: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qq9rt]
Aug 27 18:31:31.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dhzxv'
Aug 27 18:31:31.888: INFO: stderr: ""
Aug 27 18:31:31.888: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Aug 27 18:31:31.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dhzxv'
Aug 27 18:31:32.001: INFO: stderr: ""
Aug 27 18:31:32.001: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:31:32.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dhzxv" for this suite.
Aug 27 18:31:56.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:31:56.063: INFO: namespace: e2e-tests-kubectl-dhzxv, resource: bindings, ignored listing per whitelist
Aug 27 18:31:56.080: INFO: namespace e2e-tests-kubectl-dhzxv deletion completed in 24.07463485s

• [SLOW TEST:33.706 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
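
The commands the framework shelled out to above can be replayed by hand; only the namespace is swapped for a placeholder (my-test-ns). Note that the run/v1 generator was already deprecated in v1.13 and has been removed from current kubectl releases:

# create a ReplicationController straight from an image (run/v1 generator)
kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc \
  --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=my-test-ns
# kubectl resolves rc/<name> to one of the controller's pods when fetching logs
kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=my-test-ns
# clean up the controller and its pod
kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=my-test-ns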
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:31:56.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:31:56.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-trgjc" to be "success or failure"
Aug 27 18:31:56.533: INFO: Pod "downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 221.151681ms
Aug 27 18:31:58.814: INFO: Pod "downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502211079s
Aug 27 18:32:00.818: INFO: Pod "downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.506218618s
Aug 27 18:32:02.822: INFO: Pod "downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 6.510554658s
Aug 27 18:32:04.826: INFO: Pod "downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.513936956s
STEP: Saw pod success
Aug 27 18:32:04.826: INFO: Pod "downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:32:04.828: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:32:04.868: INFO: Waiting for pod downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b to disappear
Aug 27 18:32:04.879: INFO: Pod downwardapi-volume-96057746-e893-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:32:04.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-trgjc" for this suite.
Aug 27 18:32:13.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:32:13.773: INFO: namespace: e2e-tests-projected-trgjc, resource: bindings, ignored listing per whitelist
Aug 27 18:32:13.811: INFO: namespace e2e-tests-projected-trgjc deletion completed in 8.927299971s

• [SLOW TEST:17.731 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
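
The interesting part of this spec is that the container sets no CPU limit, so the downward API resolves limits.cpu to the node's allocatable CPU. A minimal pod of the same shape, assuming a busybox image and illustrative names; the real test assembles this spec programmatically rather than from YAML:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # no cpu limit is set, so limits.cpu falls back to the node's allocatable CPU
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs downwardapi-cpu-demo   # prints the node's allocatable CPU count once the pod has Succeeded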
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:32:13.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:32:14.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Aug 27 18:32:14.364: INFO: stderr: ""
Aug 27 18:32:14.364: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-23T03:53:49Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Aug 27 18:32:14.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xb7tk'
Aug 27 18:32:14.656: INFO: stderr: ""
Aug 27 18:32:14.656: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 27 18:32:14.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xb7tk'
Aug 27 18:32:15.144: INFO: stderr: ""
Aug 27 18:32:15.144: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 27 18:32:16.293: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:32:16.293: INFO: Found 0 / 1
Aug 27 18:32:17.177: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:32:17.177: INFO: Found 0 / 1
Aug 27 18:32:18.149: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:32:18.149: INFO: Found 0 / 1
Aug 27 18:32:19.149: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:32:19.149: INFO: Found 0 / 1
Aug 27 18:32:20.203: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:32:20.203: INFO: Found 0 / 1
Aug 27 18:32:21.617: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:32:21.617: INFO: Found 1 / 1
Aug 27 18:32:21.617: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 27 18:32:21.619: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:32:21.619: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 18:32:21.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-qqj7b --namespace=e2e-tests-kubectl-xb7tk'
Aug 27 18:32:22.048: INFO: stderr: ""
Aug 27 18:32:22.048: INFO: stdout: "Name:               redis-master-qqj7b\nNamespace:          e2e-tests-kubectl-xb7tk\nPriority:           0\nPriorityClassName:  \nNode:               hunter-worker/172.18.0.2\nStart Time:         Thu, 27 Aug 2020 18:32:14 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.244.1.205\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://257fa37736a81e16b9294b91ea7ce491e512d6f867bbdb094437b875955ca141\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 27 Aug 2020 18:32:19 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xkxkg (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-xkxkg:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-xkxkg\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  8s    default-scheduler       Successfully assigned e2e-tests-kubectl-xb7tk/redis-master-qqj7b to hunter-worker\n  Normal  Pulled     6s    kubelet, hunter-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, hunter-worker  Created container\n  Normal  Started    2s    kubelet, hunter-worker  Started container\n"
Aug 27 18:32:22.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-xb7tk'
Aug 27 18:32:22.177: INFO: stderr: ""
Aug 27 18:32:22.177: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-xb7tk\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-qqj7b\n"
Aug 27 18:32:22.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-xb7tk'
Aug 27 18:32:22.317: INFO: stderr: ""
Aug 27 18:32:22.317: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-xb7tk\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.99.63.64\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.205:6379\nSession Affinity:  None\nEvents:            \n"
Aug 27 18:32:22.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Aug 27 18:32:22.436: INFO: stderr: ""
Aug 27 18:32:22.436: INFO: stdout: "Name:               hunter-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:32:36 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 27 Aug 2020 18:32:22 +0000   Sat, 15 Aug 2020 09:32:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 27 Aug 2020 18:32:22 +0000   Sat, 15 Aug 2020 09:32:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 27 Aug 2020 18:32:22 +0000   Sat, 15 Aug 2020 09:32:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 27 Aug 2020 18:32:22 +0000   Sat, 15 Aug 2020 09:33:27 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    hunter-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 403efd4ae68744eab619e7055020cc3f\n System UUID:                dafd70bf-eb1f-4422-b415-7379320414ca\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-54ff9cd656-7rfjf                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     12d\n  kube-system                coredns-54ff9cd656-n4q2v                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     12d\n  kube-system                etcd-hunter-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kindnet-kjrwt                                   100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      12d\n  kube-system                
kube-apiserver-hunter-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kube-controller-manager-hunter-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kube-proxy-5tp66                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kube-scheduler-hunter-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         12d\n  local-path-storage         local-path-provisioner-674595c7-srvmc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 27 18:32:22.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-xb7tk'
Aug 27 18:32:22.546: INFO: stderr: ""
Aug 27 18:32:22.546: INFO: stdout: "Name:         e2e-tests-kubectl-xb7tk\nLabels:       e2e-framework=kubectl\n              e2e-run=dbdc887e-e889-11ea-b58c-0242ac11000b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:32:22.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xb7tk" for this suite.
Aug 27 18:32:46.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:32:46.846: INFO: namespace: e2e-tests-kubectl-xb7tk, resource: bindings, ignored listing per whitelist
Aug 27 18:32:46.889: INFO: namespace e2e-tests-kubectl-xb7tk deletion completed in 24.339137494s

• [SLOW TEST:33.078 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
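
Reproducing this spec by hand is just a series of describe calls against the objects it created; the pod and node names below are the ones from this run, while the namespace is a placeholder:

# the spec greps each describe output for the fields shown in the stdout dumps above
kubectl describe pod redis-master-qqj7b --namespace=my-test-ns
kubectl describe rc redis-master --namespace=my-test-ns
kubectl describe service redis-master --namespace=my-test-ns
kubectl describe node hunter-control-plane
kubectl describe namespace my-test-ns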
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:32:46.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b44f18e8-e893-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 18:32:47.368: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-7ncgq" to be "success or failure"
Aug 27 18:32:47.387: INFO: Pod "pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.365693ms
Aug 27 18:32:49.503: INFO: Pod "pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135224354s
Aug 27 18:32:51.507: INFO: Pod "pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138964948s
Aug 27 18:32:53.510: INFO: Pod "pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142761692s
STEP: Saw pod success
Aug 27 18:32:53.510: INFO: Pod "pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:32:53.513: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Aug 27 18:32:53.623: INFO: Waiting for pod pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b to disappear
Aug 27 18:32:53.641: INFO: Pod pod-configmaps-b4518d9b-e893-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:32:53.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7ncgq" for this suite.
Aug 27 18:32:59.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:32:59.709: INFO: namespace: e2e-tests-configmap-7ncgq, resource: bindings, ignored listing per whitelist
Aug 27 18:32:59.746: INFO: namespace e2e-tests-configmap-7ncgq deletion completed in 6.101978299s

• [SLOW TEST:12.856 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
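
A hand-rolled equivalent of this spec: a ConfigMap with a single key, mounted as a volume and read back from inside the pod. The names and the busybox image are illustrative; the suite uses its own mounttest image and generated names:

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
kubectl logs pod-configmaps-demo   # prints value-1 once the pod has Succeeded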
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:32:59.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Aug 27 18:32:59.891: INFO: namespace e2e-tests-kubectl-mh2q2
Aug 27 18:32:59.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mh2q2'
Aug 27 18:33:00.169: INFO: stderr: ""
Aug 27 18:33:00.169: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 27 18:33:01.173: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:33:01.174: INFO: Found 0 / 1
Aug 27 18:33:02.173: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:33:02.173: INFO: Found 0 / 1
Aug 27 18:33:03.172: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:33:03.172: INFO: Found 1 / 1
Aug 27 18:33:03.173: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 27 18:33:03.175: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:33:03.175: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 18:33:03.175: INFO: wait on redis-master startup in e2e-tests-kubectl-mh2q2 
Aug 27 18:33:03.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s2f47 redis-master --namespace=e2e-tests-kubectl-mh2q2'
Aug 27 18:33:03.302: INFO: stderr: ""
Aug 27 18:33:03.302: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Aug 18:33:03.044 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Aug 18:33:03.044 # Server started, Redis version 3.2.12\n1:M 27 Aug 18:33:03.044 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Aug 18:33:03.044 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 27 18:33:03.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-mh2q2'
Aug 27 18:33:03.467: INFO: stderr: ""
Aug 27 18:33:03.467: INFO: stdout: "service/rm2 exposed\n"
Aug 27 18:33:03.557: INFO: Service rm2 in namespace e2e-tests-kubectl-mh2q2 found.
STEP: exposing service
Aug 27 18:33:05.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-mh2q2'
Aug 27 18:33:05.736: INFO: stderr: ""
Aug 27 18:33:05.736: INFO: stdout: "service/rm3 exposed\n"
Aug 27 18:33:05.774: INFO: Service rm3 in namespace e2e-tests-kubectl-mh2q2 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:33:07.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mh2q2" for this suite.
Aug 27 18:33:30.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:33:30.265: INFO: namespace: e2e-tests-kubectl-mh2q2, resource: bindings, ignored listing per whitelist
Aug 27 18:33:30.292: INFO: namespace e2e-tests-kubectl-mh2q2 deletion completed in 22.506529476s

• [SLOW TEST:30.546 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
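
The two expose calls above are the whole point of the spec: one Service carved out of a ReplicationController's selector, then a second Service exposed from the first under a new name and port. Replayed by hand, with the namespace swapped for a placeholder:

# expose the ReplicationController on port 1234, forwarding to the redis port
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=my-test-ns
# a Service can itself be re-exposed under a new name and port
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=my-test-ns
kubectl get services rm2 rm3 --namespace=my-test-ns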
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:33:30.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 27 18:33:35.359: INFO: Successfully updated pod "pod-update-ce5c7db3-e893-11ea-b58c-0242ac11000b"
STEP: verifying the updated pod is in kubernetes
Aug 27 18:33:35.370: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:33:35.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6pw5s" for this suite.
Aug 27 18:33:57.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:33:57.470: INFO: namespace: e2e-tests-pods-6pw5s, resource: bindings, ignored listing per whitelist
Aug 27 18:33:57.524: INFO: namespace e2e-tests-pods-6pw5s deletion completed in 22.150408527s

• [SLOW TEST:27.232 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
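
This spec updates a mutable field on a live pod through the API and checks that the change is visible on re-read. A CLI approximation using labels, with illustrative names; run-pod/v1 is the generator the deprecation notices above point to:

# start a standalone pod, then change one of its labels in place
kubectl run pod-update-demo --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
kubectl label pod pod-update-demo time=updated --overwrite
kubectl get pod pod-update-demo --show-labels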
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:33:57.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 27 18:34:02.189: INFO: Successfully updated pod "annotationupdatede59b6ca-e893-11ea-b58c-0242ac11000b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:34:04.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zkpvm" for this suite.
Aug 27 18:34:26.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:34:26.301: INFO: namespace: e2e-tests-projected-zkpvm, resource: bindings, ignored listing per whitelist
Aug 27 18:34:26.341: INFO: namespace e2e-tests-projected-zkpvm deletion completed in 22.103728022s

• [SLOW TEST:28.817 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
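
Here the projected downward API volume exposes metadata.annotations, and the kubelet rewrites the projected file when an annotation changes, without restarting the container. A sketch with placeholder names and a busybox loop standing in for the suite's mounttest container:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# change the annotation; the kubelet refreshes the projected file in place
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
kubectl logs annotationupdate-demo --tail=5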
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:34:26.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-vjl2
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 18:34:26.458: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vjl2" in namespace "e2e-tests-subpath-dxjrg" to be "success or failure"
Aug 27 18:34:26.492: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.597203ms
Aug 27 18:34:28.496: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038515563s
Aug 27 18:34:30.501: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043159827s
Aug 27 18:34:32.505: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046941804s
Aug 27 18:34:34.510: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 8.051777798s
Aug 27 18:34:36.514: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 10.056318776s
Aug 27 18:34:38.519: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 12.060723715s
Aug 27 18:34:40.523: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 14.064897904s
Aug 27 18:34:42.527: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 16.069329872s
Aug 27 18:34:44.531: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 18.072978542s
Aug 27 18:34:46.535: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 20.076659527s
Aug 27 18:34:48.539: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 22.080968182s
Aug 27 18:34:50.543: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 24.084700884s
Aug 27 18:34:52.547: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Running", Reason="", readiness=false. Elapsed: 26.088686874s
Aug 27 18:34:54.551: INFO: Pod "pod-subpath-test-configmap-vjl2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.093189987s
STEP: Saw pod success
Aug 27 18:34:54.551: INFO: Pod "pod-subpath-test-configmap-vjl2" satisfied condition "success or failure"
Aug 27 18:34:54.555: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-vjl2 container test-container-subpath-configmap-vjl2: 
STEP: delete the pod
Aug 27 18:34:54.595: INFO: Waiting for pod pod-subpath-test-configmap-vjl2 to disappear
Aug 27 18:34:54.606: INFO: Pod pod-subpath-test-configmap-vjl2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vjl2
Aug 27 18:34:54.607: INFO: Deleting pod "pod-subpath-test-configmap-vjl2" in namespace "e2e-tests-subpath-dxjrg"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:34:54.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dxjrg" for this suite.
Aug 27 18:35:00.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:35:00.706: INFO: namespace: e2e-tests-subpath-dxjrg, resource: bindings, ignored listing per whitelist
Aug 27 18:35:00.747: INFO: namespace e2e-tests-subpath-dxjrg deletion completed in 6.133447125s

• [SLOW TEST:34.406 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
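
subPath is what this spec exercises: mounting a single key of a ConfigMap at a file path instead of shadowing the whole directory. A minimal reproduction with illustrative names in place of the generated ones above:

kubectl create configmap subpath-demo --from-literal=my-key=mount-me
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /etc/conf/my-key"]
    volumeMounts:
    - name: config
      mountPath: /etc/conf/my-key
      subPath: my-key      # mounts a single key rather than the whole ConfigMap directory
  volumes:
  - name: config
    configMap:
      name: subpath-demo
EOF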
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:35:00.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 27 18:35:00.842: INFO: Waiting up to 5m0s for pod "pod-04034026-e894-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-s5fct" to be "success or failure"
Aug 27 18:35:00.846: INFO: Pod "pod-04034026-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739547ms
Aug 27 18:35:02.887: INFO: Pod "pod-04034026-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044662794s
Aug 27 18:35:04.897: INFO: Pod "pod-04034026-e894-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05455565s
STEP: Saw pod success
Aug 27 18:35:04.897: INFO: Pod "pod-04034026-e894-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:35:04.898: INFO: Trying to get logs from node hunter-worker pod pod-04034026-e894-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:35:04.921: INFO: Waiting for pod pod-04034026-e894-11ea-b58c-0242ac11000b to disappear
Aug 27 18:35:04.974: INFO: Pod pod-04034026-e894-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:35:04.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-s5fct" for this suite.
Aug 27 18:35:11.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:35:11.264: INFO: namespace: e2e-tests-emptydir-s5fct, resource: bindings, ignored listing per whitelist
Aug 27 18:35:11.281: INFO: namespace e2e-tests-emptydir-s5fct deletion completed in 6.303142776s

• [SLOW TEST:10.534 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
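
The spec writes into an emptyDir on the default (disk-backed) medium and asserts the mount is world-writable (0777) and owned by root, as the test name says. A small stand-in pod, with busybox in place of the suite's mounttest image:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # show the mount's mode and owner, then prove it is writable
    command: ["sh", "-c", "ls -ld /mnt/cache && touch /mnt/cache/probe && ls -l /mnt/cache"]
    volumeMounts:
    - name: cache
      mountPath: /mnt/cache
  volumes:
  - name: cache
    emptyDir: {}        # default medium, the case this spec covers
EOF
kubectl logs pod-emptydir-demo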
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:35:11.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 18:35:11.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bxl9l'
Aug 27 18:35:11.494: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 18:35:11.494: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Aug 27 18:35:13.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-bxl9l'
Aug 27 18:35:13.711: INFO: stderr: ""
Aug 27 18:35:13.711: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:35:13.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bxl9l" for this suite.
Aug 27 18:35:19.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:35:19.858: INFO: namespace: e2e-tests-kubectl-bxl9l, resource: bindings, ignored listing per whitelist
Aug 27 18:35:19.897: INFO: namespace e2e-tests-kubectl-bxl9l deletion completed in 6.151910653s

• [SLOW TEST:8.617 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
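
Without an explicit --generator, kubectl 1.13 falls back to deployment/apps.v1, which is exactly what the deprecation warning above reports; current kubectl versions create a bare Pod instead. The run can be repeated by hand, with the namespace swapped for a placeholder:

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=my-test-ns
# the Deployment spawns a ReplicaSet and a pod named after it
kubectl get deployment,pods --namespace=my-test-ns
kubectl delete deployment e2e-test-nginx-deployment --namespace=my-test-ns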
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:35:19.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0f74e4d0-e894-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:35:20.073: INFO: Waiting up to 5m0s for pod "pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-xzmxf" to be "success or failure"
Aug 27 18:35:20.105: INFO: Pod "pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.068404ms
Aug 27 18:35:22.109: INFO: Pod "pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0352072s
Aug 27 18:35:24.138: INFO: Pod "pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064075236s
Aug 27 18:35:26.141: INFO: Pod "pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067501353s
STEP: Saw pod success
Aug 27 18:35:26.141: INFO: Pod "pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:35:26.144: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Aug 27 18:35:26.181: INFO: Waiting for pod pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b to disappear
Aug 27 18:35:26.184: INFO: Pod pod-secrets-0f75931b-e894-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:35:26.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xzmxf" for this suite.
Aug 27 18:35:32.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:35:32.445: INFO: namespace: e2e-tests-secrets-xzmxf, resource: bindings, ignored listing per whitelist
Aug 27 18:35:32.484: INFO: namespace e2e-tests-secrets-xzmxf deletion completed in 6.297512277s

• [SLOW TEST:12.586 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
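
The combination under test is a secret volume with a restrictive defaultMode plus a pod-level runAsUser and fsGroup, so a non-root user can still read the projected files through group ownership. A hand-written approximation with placeholder names and values:

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root
    fsGroup: 1001        # group ownership applied to the secret files
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0440   # group-readable, so the fsGroup user can read it
EOF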
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:35:32.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:35:33.222: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 27 18:35:33.244: INFO: Number of nodes with available pods: 0
Aug 27 18:35:33.244: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 27 18:35:33.293: INFO: Number of nodes with available pods: 0
Aug 27 18:35:33.293: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:34.297: INFO: Number of nodes with available pods: 0
Aug 27 18:35:34.297: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:35.312: INFO: Number of nodes with available pods: 0
Aug 27 18:35:35.313: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:36.297: INFO: Number of nodes with available pods: 0
Aug 27 18:35:36.297: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:37.534: INFO: Number of nodes with available pods: 1
Aug 27 18:35:37.534: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 27 18:35:37.966: INFO: Number of nodes with available pods: 1
Aug 27 18:35:37.966: INFO: Number of running nodes: 0, number of available pods: 1
Aug 27 18:35:38.969: INFO: Number of nodes with available pods: 0
Aug 27 18:35:38.969: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 27 18:35:39.249: INFO: Number of nodes with available pods: 0
Aug 27 18:35:39.249: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:40.252: INFO: Number of nodes with available pods: 0
Aug 27 18:35:40.252: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:41.253: INFO: Number of nodes with available pods: 0
Aug 27 18:35:41.253: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:42.355: INFO: Number of nodes with available pods: 0
Aug 27 18:35:42.355: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:43.253: INFO: Number of nodes with available pods: 0
Aug 27 18:35:43.253: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:44.276: INFO: Number of nodes with available pods: 0
Aug 27 18:35:44.276: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:45.254: INFO: Number of nodes with available pods: 0
Aug 27 18:35:45.254: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:46.253: INFO: Number of nodes with available pods: 0
Aug 27 18:35:46.253: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:47.254: INFO: Number of nodes with available pods: 0
Aug 27 18:35:47.254: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:48.263: INFO: Number of nodes with available pods: 0
Aug 27 18:35:48.263: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:49.253: INFO: Number of nodes with available pods: 0
Aug 27 18:35:49.253: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:50.253: INFO: Number of nodes with available pods: 0
Aug 27 18:35:50.253: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:51.540: INFO: Number of nodes with available pods: 0
Aug 27 18:35:51.540: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 18:35:52.252: INFO: Number of nodes with available pods: 1
Aug 27 18:35:52.252: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gl64c, will wait for the garbage collector to delete the pods
Aug 27 18:35:52.315: INFO: Deleting DaemonSet.extensions daemon-set took: 5.535011ms
Aug 27 18:35:52.415: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.223269ms
Aug 27 18:35:58.124: INFO: Number of nodes with available pods: 0
Aug 27 18:35:58.124: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 18:35:58.126: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gl64c/daemonsets","resourceVersion":"2699096"},"items":null}

Aug 27 18:35:58.162: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gl64c/pods","resourceVersion":"2699097"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:35:58.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gl64c" for this suite.
Aug 27 18:36:04.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:36:04.282: INFO: namespace: e2e-tests-daemonsets-gl64c, resource: bindings, ignored listing per whitelist
Aug 27 18:36:04.302: INFO: namespace e2e-tests-daemonsets-gl64c deletion completed in 6.097605401s

• [SLOW TEST:31.817 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
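
The "complex" part of this spec is a DaemonSet constrained by a nodeSelector, steered around the cluster by relabelling nodes, and finally retargeted with a patched selector and a RollingUpdate strategy. A rough CLI version, using the node name from this run and a placeholder namespace; the suite drives the same steps through its Go client:

kubectl create -f - --namespace=my-test-ns <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue          # only nodes labelled color=blue get a daemon pod
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# labelling a node makes the controller launch a pod there; relabelling evicts it again
kubectl label node hunter-worker color=blue
kubectl label node hunter-worker color=green --overwrite
# retarget the DaemonSet and switch it to rolling updates
kubectl patch daemonset daemon-set --namespace=my-test-ns --type=merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"},"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'
# remove the test label afterwards
kubectl label node hunter-worker color-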
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:36:04.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-fq9h6
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-fq9h6
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-fq9h6
Aug 27 18:36:04.405: INFO: Found 0 stateful pods, waiting for 1
Aug 27 18:36:14.420: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 27 18:36:14.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 18:36:14.753: INFO: stderr: "I0827 18:36:14.616934    1649 log.go:172] (0xc0001386e0) (0xc000718640) Create stream\nI0827 18:36:14.617014    1649 log.go:172] (0xc0001386e0) (0xc000718640) Stream added, broadcasting: 1\nI0827 18:36:14.619828    1649 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0827 18:36:14.619883    1649 log.go:172] (0xc0001386e0) (0xc0007c4f00) Create stream\nI0827 18:36:14.619901    1649 log.go:172] (0xc0001386e0) (0xc0007c4f00) Stream added, broadcasting: 3\nI0827 18:36:14.621057    1649 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0827 18:36:14.621107    1649 log.go:172] (0xc0001386e0) (0xc0007186e0) Create stream\nI0827 18:36:14.621119    1649 log.go:172] (0xc0001386e0) (0xc0007186e0) Stream added, broadcasting: 5\nI0827 18:36:14.622062    1649 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0827 18:36:14.743853    1649 log.go:172] (0xc0001386e0) Data frame received for 3\nI0827 18:36:14.743903    1649 log.go:172] (0xc0007c4f00) (3) Data frame handling\nI0827 18:36:14.743970    1649 log.go:172] (0xc0007c4f00) (3) Data frame sent\nI0827 18:36:14.744010    1649 log.go:172] (0xc0001386e0) Data frame received for 3\nI0827 18:36:14.744040    1649 log.go:172] (0xc0007c4f00) (3) Data frame handling\nI0827 18:36:14.744095    1649 log.go:172] (0xc0001386e0) Data frame received for 5\nI0827 18:36:14.744229    1649 log.go:172] (0xc0007186e0) (5) Data frame handling\nI0827 18:36:14.746401    1649 log.go:172] (0xc0001386e0) Data frame received for 1\nI0827 18:36:14.746460    1649 log.go:172] (0xc000718640) (1) Data frame handling\nI0827 18:36:14.746500    1649 log.go:172] (0xc000718640) (1) Data frame sent\nI0827 18:36:14.746560    1649 log.go:172] (0xc0001386e0) (0xc000718640) Stream removed, broadcasting: 1\nI0827 18:36:14.746591    1649 log.go:172] (0xc0001386e0) Go away received\nI0827 18:36:14.746849    1649 log.go:172] (0xc0001386e0) (0xc000718640) Stream removed, broadcasting: 1\nI0827 18:36:14.746901    1649 log.go:172] (0xc0001386e0) (0xc0007c4f00) Stream removed, broadcasting: 3\nI0827 18:36:14.746921    1649 log.go:172] (0xc0001386e0) (0xc0007186e0) Stream removed, broadcasting: 5\n"
Aug 27 18:36:14.753: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 18:36:14.753: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 18:36:14.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 27 18:36:24.762: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 18:36:24.762: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 18:36:24.785: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999281s
Aug 27 18:36:25.790: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989097676s
Aug 27 18:36:26.795: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98378872s
Aug 27 18:36:27.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.978819903s
Aug 27 18:36:28.820: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.958979691s
Aug 27 18:36:29.825: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.954638677s
Aug 27 18:36:30.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.948857049s
Aug 27 18:36:31.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.944555421s
Aug 27 18:36:32.851: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.940344684s
Aug 27 18:36:33.854: INFO: Verifying statefulset ss doesn't scale past 1 for another 923.273877ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-fq9h6
Aug 27 18:36:34.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 18:36:35.087: INFO: stderr: "I0827 18:36:34.989852    1671 log.go:172] (0xc00078a160) (0xc0006f4640) Create stream\nI0827 18:36:34.989898    1671 log.go:172] (0xc00078a160) (0xc0006f4640) Stream added, broadcasting: 1\nI0827 18:36:34.991952    1671 log.go:172] (0xc00078a160) Reply frame received for 1\nI0827 18:36:34.991986    1671 log.go:172] (0xc00078a160) (0xc00029ed20) Create stream\nI0827 18:36:34.991999    1671 log.go:172] (0xc00078a160) (0xc00029ed20) Stream added, broadcasting: 3\nI0827 18:36:34.992959    1671 log.go:172] (0xc00078a160) Reply frame received for 3\nI0827 18:36:34.993019    1671 log.go:172] (0xc00078a160) (0xc0002e6000) Create stream\nI0827 18:36:34.993052    1671 log.go:172] (0xc00078a160) (0xc0002e6000) Stream added, broadcasting: 5\nI0827 18:36:34.993853    1671 log.go:172] (0xc00078a160) Reply frame received for 5\nI0827 18:36:35.078599    1671 log.go:172] (0xc00078a160) Data frame received for 5\nI0827 18:36:35.078639    1671 log.go:172] (0xc0002e6000) (5) Data frame handling\nI0827 18:36:35.078662    1671 log.go:172] (0xc00078a160) Data frame received for 3\nI0827 18:36:35.078670    1671 log.go:172] (0xc00029ed20) (3) Data frame handling\nI0827 18:36:35.078679    1671 log.go:172] (0xc00029ed20) (3) Data frame sent\nI0827 18:36:35.078690    1671 log.go:172] (0xc00078a160) Data frame received for 3\nI0827 18:36:35.078697    1671 log.go:172] (0xc00029ed20) (3) Data frame handling\nI0827 18:36:35.080898    1671 log.go:172] (0xc00078a160) Data frame received for 1\nI0827 18:36:35.080920    1671 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0827 18:36:35.080944    1671 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0827 18:36:35.080958    1671 log.go:172] (0xc00078a160) (0xc0006f4640) Stream removed, broadcasting: 1\nI0827 18:36:35.080967    1671 log.go:172] (0xc00078a160) Go away received\nI0827 18:36:35.081111    1671 log.go:172] (0xc00078a160) (0xc0006f4640) Stream removed, broadcasting: 1\nI0827 18:36:35.081127    1671 log.go:172] (0xc00078a160) (0xc00029ed20) Stream removed, broadcasting: 3\nI0827 18:36:35.081135    1671 log.go:172] (0xc00078a160) (0xc0002e6000) Stream removed, broadcasting: 5\n"
Aug 27 18:36:35.087: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 18:36:35.087: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 18:36:35.089: INFO: Found 1 stateful pods, waiting for 3
Aug 27 18:36:45.097: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:36:45.097: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:36:45.097: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 18:36:55.094: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:36:55.094: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 18:36:55.094: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 27 18:36:55.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 18:36:55.358: INFO: stderr: "I0827 18:36:55.253234    1693 log.go:172] (0xc000138160) (0xc0005d61e0) Create stream\nI0827 18:36:55.253336    1693 log.go:172] (0xc000138160) (0xc0005d61e0) Stream added, broadcasting: 1\nI0827 18:36:55.255874    1693 log.go:172] (0xc000138160) Reply frame received for 1\nI0827 18:36:55.255937    1693 log.go:172] (0xc000138160) (0xc000486b40) Create stream\nI0827 18:36:55.255963    1693 log.go:172] (0xc000138160) (0xc000486b40) Stream added, broadcasting: 3\nI0827 18:36:55.257176    1693 log.go:172] (0xc000138160) Reply frame received for 3\nI0827 18:36:55.257236    1693 log.go:172] (0xc000138160) (0xc0001ee000) Create stream\nI0827 18:36:55.257261    1693 log.go:172] (0xc000138160) (0xc0001ee000) Stream added, broadcasting: 5\nI0827 18:36:55.258344    1693 log.go:172] (0xc000138160) Reply frame received for 5\nI0827 18:36:55.348470    1693 log.go:172] (0xc000138160) Data frame received for 5\nI0827 18:36:55.348508    1693 log.go:172] (0xc0001ee000) (5) Data frame handling\nI0827 18:36:55.348539    1693 log.go:172] (0xc000138160) Data frame received for 3\nI0827 18:36:55.348550    1693 log.go:172] (0xc000486b40) (3) Data frame handling\nI0827 18:36:55.348558    1693 log.go:172] (0xc000486b40) (3) Data frame sent\nI0827 18:36:55.348564    1693 log.go:172] (0xc000138160) Data frame received for 3\nI0827 18:36:55.348569    1693 log.go:172] (0xc000486b40) (3) Data frame handling\nI0827 18:36:55.350088    1693 log.go:172] (0xc000138160) Data frame received for 1\nI0827 18:36:55.350114    1693 log.go:172] (0xc0005d61e0) (1) Data frame handling\nI0827 18:36:55.350129    1693 log.go:172] (0xc0005d61e0) (1) Data frame sent\nI0827 18:36:55.350147    1693 log.go:172] (0xc000138160) (0xc0005d61e0) Stream removed, broadcasting: 1\nI0827 18:36:55.350170    1693 log.go:172] (0xc000138160) Go away received\nI0827 18:36:55.350390    1693 log.go:172] (0xc000138160) (0xc0005d61e0) Stream removed, broadcasting: 1\nI0827 18:36:55.350422    1693 log.go:172] (0xc000138160) (0xc000486b40) Stream removed, broadcasting: 3\nI0827 18:36:55.350438    1693 log.go:172] (0xc000138160) (0xc0001ee000) Stream removed, broadcasting: 5\n"
Aug 27 18:36:55.358: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 18:36:55.358: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 18:36:55.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 18:36:55.597: INFO: stderr: "I0827 18:36:55.479318    1715 log.go:172] (0xc000138840) (0xc0005cb220) Create stream\nI0827 18:36:55.479369    1715 log.go:172] (0xc000138840) (0xc0005cb220) Stream added, broadcasting: 1\nI0827 18:36:55.481350    1715 log.go:172] (0xc000138840) Reply frame received for 1\nI0827 18:36:55.481395    1715 log.go:172] (0xc000138840) (0xc000760000) Create stream\nI0827 18:36:55.481407    1715 log.go:172] (0xc000138840) (0xc000760000) Stream added, broadcasting: 3\nI0827 18:36:55.482103    1715 log.go:172] (0xc000138840) Reply frame received for 3\nI0827 18:36:55.482131    1715 log.go:172] (0xc000138840) (0xc00059c000) Create stream\nI0827 18:36:55.482139    1715 log.go:172] (0xc000138840) (0xc00059c000) Stream added, broadcasting: 5\nI0827 18:36:55.482713    1715 log.go:172] (0xc000138840) Reply frame received for 5\nI0827 18:36:55.585448    1715 log.go:172] (0xc000138840) Data frame received for 3\nI0827 18:36:55.585476    1715 log.go:172] (0xc000760000) (3) Data frame handling\nI0827 18:36:55.585488    1715 log.go:172] (0xc000760000) (3) Data frame sent\nI0827 18:36:55.585494    1715 log.go:172] (0xc000138840) Data frame received for 3\nI0827 18:36:55.585501    1715 log.go:172] (0xc000760000) (3) Data frame handling\nI0827 18:36:55.585736    1715 log.go:172] (0xc000138840) Data frame received for 5\nI0827 18:36:55.585761    1715 log.go:172] (0xc00059c000) (5) Data frame handling\nI0827 18:36:55.587587    1715 log.go:172] (0xc000138840) Data frame received for 1\nI0827 18:36:55.587609    1715 log.go:172] (0xc0005cb220) (1) Data frame handling\nI0827 18:36:55.587621    1715 log.go:172] (0xc0005cb220) (1) Data frame sent\nI0827 18:36:55.587635    1715 log.go:172] (0xc000138840) (0xc0005cb220) Stream removed, broadcasting: 1\nI0827 18:36:55.587649    1715 log.go:172] (0xc000138840) Go away received\nI0827 18:36:55.587858    1715 log.go:172] (0xc000138840) (0xc0005cb220) Stream removed, broadcasting: 1\nI0827 18:36:55.587877    1715 log.go:172] (0xc000138840) (0xc000760000) Stream removed, broadcasting: 3\nI0827 18:36:55.587888    1715 log.go:172] (0xc000138840) (0xc00059c000) Stream removed, broadcasting: 5\n"
Aug 27 18:36:55.597: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 18:36:55.597: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 18:36:55.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 18:36:55.836: INFO: stderr: "I0827 18:36:55.728527    1738 log.go:172] (0xc000138840) (0xc00079f4a0) Create stream\nI0827 18:36:55.728583    1738 log.go:172] (0xc000138840) (0xc00079f4a0) Stream added, broadcasting: 1\nI0827 18:36:55.731428    1738 log.go:172] (0xc000138840) Reply frame received for 1\nI0827 18:36:55.731474    1738 log.go:172] (0xc000138840) (0xc000730000) Create stream\nI0827 18:36:55.731504    1738 log.go:172] (0xc000138840) (0xc000730000) Stream added, broadcasting: 3\nI0827 18:36:55.732429    1738 log.go:172] (0xc000138840) Reply frame received for 3\nI0827 18:36:55.732481    1738 log.go:172] (0xc000138840) (0xc00079f540) Create stream\nI0827 18:36:55.732508    1738 log.go:172] (0xc000138840) (0xc00079f540) Stream added, broadcasting: 5\nI0827 18:36:55.733594    1738 log.go:172] (0xc000138840) Reply frame received for 5\nI0827 18:36:55.827039    1738 log.go:172] (0xc000138840) Data frame received for 3\nI0827 18:36:55.827071    1738 log.go:172] (0xc000730000) (3) Data frame handling\nI0827 18:36:55.827087    1738 log.go:172] (0xc000730000) (3) Data frame sent\nI0827 18:36:55.827663    1738 log.go:172] (0xc000138840) Data frame received for 3\nI0827 18:36:55.827690    1738 log.go:172] (0xc000730000) (3) Data frame handling\nI0827 18:36:55.827709    1738 log.go:172] (0xc000138840) Data frame received for 5\nI0827 18:36:55.827716    1738 log.go:172] (0xc00079f540) (5) Data frame handling\nI0827 18:36:55.829515    1738 log.go:172] (0xc000138840) Data frame received for 1\nI0827 18:36:55.829539    1738 log.go:172] (0xc00079f4a0) (1) Data frame handling\nI0827 18:36:55.829565    1738 log.go:172] (0xc00079f4a0) (1) Data frame sent\nI0827 18:36:55.829748    1738 log.go:172] (0xc000138840) (0xc00079f4a0) Stream removed, broadcasting: 1\nI0827 18:36:55.829781    1738 log.go:172] (0xc000138840) Go away received\nI0827 18:36:55.830010    1738 log.go:172] (0xc000138840) (0xc00079f4a0) Stream removed, broadcasting: 1\nI0827 18:36:55.830045    1738 log.go:172] (0xc000138840) (0xc000730000) Stream removed, broadcasting: 3\nI0827 18:36:55.830062    1738 log.go:172] (0xc000138840) (0xc00079f540) Stream removed, broadcasting: 5\n"
Aug 27 18:36:55.836: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 18:36:55.836: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 18:36:55.836: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 18:36:55.876: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 27 18:37:05.889: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 18:37:05.889: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 18:37:05.889: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 18:37:05.905: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999976s
Aug 27 18:37:06.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98934459s
Aug 27 18:37:07.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981150056s
Aug 27 18:37:08.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97609208s
Aug 27 18:37:09.927: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.970947438s
Aug 27 18:37:10.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.967034963s
Aug 27 18:37:11.935: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962745365s
Aug 27 18:37:12.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.95925292s
Aug 27 18:37:13.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955440271s
Aug 27 18:37:14.989: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.953676ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-fq9h6
Aug 27 18:37:15.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 18:37:16.189: INFO: stderr: "I0827 18:37:16.108790    1760 log.go:172] (0xc0001386e0) (0xc00070e640) Create stream\nI0827 18:37:16.108853    1760 log.go:172] (0xc0001386e0) (0xc00070e640) Stream added, broadcasting: 1\nI0827 18:37:16.111254    1760 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0827 18:37:16.111296    1760 log.go:172] (0xc0001386e0) (0xc0007dae60) Create stream\nI0827 18:37:16.111310    1760 log.go:172] (0xc0001386e0) (0xc0007dae60) Stream added, broadcasting: 3\nI0827 18:37:16.112100    1760 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0827 18:37:16.112130    1760 log.go:172] (0xc0001386e0) (0xc00027a000) Create stream\nI0827 18:37:16.112142    1760 log.go:172] (0xc0001386e0) (0xc00027a000) Stream added, broadcasting: 5\nI0827 18:37:16.113232    1760 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0827 18:37:16.183317    1760 log.go:172] (0xc0001386e0) Data frame received for 5\nI0827 18:37:16.183351    1760 log.go:172] (0xc00027a000) (5) Data frame handling\nI0827 18:37:16.183371    1760 log.go:172] (0xc0001386e0) Data frame received for 3\nI0827 18:37:16.183380    1760 log.go:172] (0xc0007dae60) (3) Data frame handling\nI0827 18:37:16.183390    1760 log.go:172] (0xc0007dae60) (3) Data frame sent\nI0827 18:37:16.183494    1760 log.go:172] (0xc0001386e0) Data frame received for 3\nI0827 18:37:16.183527    1760 log.go:172] (0xc0007dae60) (3) Data frame handling\nI0827 18:37:16.184606    1760 log.go:172] (0xc0001386e0) Data frame received for 1\nI0827 18:37:16.184620    1760 log.go:172] (0xc00070e640) (1) Data frame handling\nI0827 18:37:16.184627    1760 log.go:172] (0xc00070e640) (1) Data frame sent\nI0827 18:37:16.184635    1760 log.go:172] (0xc0001386e0) (0xc00070e640) Stream removed, broadcasting: 1\nI0827 18:37:16.184656    1760 log.go:172] (0xc0001386e0) Go away received\nI0827 18:37:16.184850    1760 log.go:172] (0xc0001386e0) (0xc00070e640) Stream removed, broadcasting: 1\nI0827 18:37:16.184863    1760 log.go:172] (0xc0001386e0) (0xc0007dae60) Stream removed, broadcasting: 3\nI0827 18:37:16.184870    1760 log.go:172] (0xc0001386e0) (0xc00027a000) Stream removed, broadcasting: 5\n"
Aug 27 18:37:16.189: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 18:37:16.189: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 18:37:16.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 18:37:16.378: INFO: stderr: "I0827 18:37:16.305617    1783 log.go:172] (0xc000138580) (0xc0002a2e60) Create stream\nI0827 18:37:16.305655    1783 log.go:172] (0xc000138580) (0xc0002a2e60) Stream added, broadcasting: 1\nI0827 18:37:16.315785    1783 log.go:172] (0xc000138580) Reply frame received for 1\nI0827 18:37:16.315819    1783 log.go:172] (0xc000138580) (0xc0006ca000) Create stream\nI0827 18:37:16.315826    1783 log.go:172] (0xc000138580) (0xc0006ca000) Stream added, broadcasting: 3\nI0827 18:37:16.316333    1783 log.go:172] (0xc000138580) Reply frame received for 3\nI0827 18:37:16.316351    1783 log.go:172] (0xc000138580) (0xc0002a2fa0) Create stream\nI0827 18:37:16.316356    1783 log.go:172] (0xc000138580) (0xc0002a2fa0) Stream added, broadcasting: 5\nI0827 18:37:16.318102    1783 log.go:172] (0xc000138580) Reply frame received for 5\nI0827 18:37:16.370551    1783 log.go:172] (0xc000138580) Data frame received for 5\nI0827 18:37:16.370586    1783 log.go:172] (0xc000138580) Data frame received for 3\nI0827 18:37:16.370609    1783 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0827 18:37:16.370620    1783 log.go:172] (0xc0006ca000) (3) Data frame sent\nI0827 18:37:16.370656    1783 log.go:172] (0xc0002a2fa0) (5) Data frame handling\nI0827 18:37:16.370734    1783 log.go:172] (0xc000138580) Data frame received for 3\nI0827 18:37:16.370769    1783 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0827 18:37:16.372389    1783 log.go:172] (0xc000138580) Data frame received for 1\nI0827 18:37:16.372414    1783 log.go:172] (0xc0002a2e60) (1) Data frame handling\nI0827 18:37:16.372430    1783 log.go:172] (0xc0002a2e60) (1) Data frame sent\nI0827 18:37:16.372448    1783 log.go:172] (0xc000138580) (0xc0002a2e60) Stream removed, broadcasting: 1\nI0827 18:37:16.372476    1783 log.go:172] (0xc000138580) Go away received\nI0827 18:37:16.372690    1783 log.go:172] (0xc000138580) (0xc0002a2e60) Stream removed, broadcasting: 1\nI0827 18:37:16.372717    1783 log.go:172] (0xc000138580) (0xc0006ca000) Stream removed, broadcasting: 3\nI0827 18:37:16.372813    1783 log.go:172] (0xc000138580) (0xc0002a2fa0) Stream removed, broadcasting: 5\n"
Aug 27 18:37:16.378: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 18:37:16.378: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 18:37:16.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fq9h6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 18:37:16.559: INFO: stderr: "I0827 18:37:16.484611    1804 log.go:172] (0xc000138790) (0xc00074e640) Create stream\nI0827 18:37:16.484668    1804 log.go:172] (0xc000138790) (0xc00074e640) Stream added, broadcasting: 1\nI0827 18:37:16.487614    1804 log.go:172] (0xc000138790) Reply frame received for 1\nI0827 18:37:16.487669    1804 log.go:172] (0xc000138790) (0xc000670dc0) Create stream\nI0827 18:37:16.487704    1804 log.go:172] (0xc000138790) (0xc000670dc0) Stream added, broadcasting: 3\nI0827 18:37:16.489119    1804 log.go:172] (0xc000138790) Reply frame received for 3\nI0827 18:37:16.489154    1804 log.go:172] (0xc000138790) (0xc00074e6e0) Create stream\nI0827 18:37:16.489167    1804 log.go:172] (0xc000138790) (0xc00074e6e0) Stream added, broadcasting: 5\nI0827 18:37:16.490451    1804 log.go:172] (0xc000138790) Reply frame received for 5\nI0827 18:37:16.550010    1804 log.go:172] (0xc000138790) Data frame received for 5\nI0827 18:37:16.550067    1804 log.go:172] (0xc00074e6e0) (5) Data frame handling\nI0827 18:37:16.550103    1804 log.go:172] (0xc000138790) Data frame received for 3\nI0827 18:37:16.550128    1804 log.go:172] (0xc000670dc0) (3) Data frame handling\nI0827 18:37:16.550153    1804 log.go:172] (0xc000670dc0) (3) Data frame sent\nI0827 18:37:16.550203    1804 log.go:172] (0xc000138790) Data frame received for 3\nI0827 18:37:16.550234    1804 log.go:172] (0xc000670dc0) (3) Data frame handling\nI0827 18:37:16.551255    1804 log.go:172] (0xc000138790) Data frame received for 1\nI0827 18:37:16.551277    1804 log.go:172] (0xc00074e640) (1) Data frame handling\nI0827 18:37:16.551293    1804 log.go:172] (0xc00074e640) (1) Data frame sent\nI0827 18:37:16.551308    1804 log.go:172] (0xc000138790) (0xc00074e640) Stream removed, broadcasting: 1\nI0827 18:37:16.551343    1804 log.go:172] (0xc000138790) Go away received\nI0827 18:37:16.551583    1804 log.go:172] (0xc000138790) (0xc00074e640) Stream removed, broadcasting: 1\nI0827 18:37:16.551642    1804 log.go:172] (0xc000138790) (0xc000670dc0) Stream removed, broadcasting: 3\nI0827 18:37:16.551666    1804 log.go:172] (0xc000138790) (0xc00074e6e0) Stream removed, broadcasting: 5\n"
Aug 27 18:37:16.559: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 18:37:16.559: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 18:37:16.559: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 27 18:37:56.570: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fq9h6
Aug 27 18:37:56.574: INFO: Scaling statefulset ss to 0
Aug 27 18:37:56.583: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 18:37:56.586: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:37:56.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-fq9h6" for this suite.
Aug 27 18:38:02.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:38:02.688: INFO: namespace: e2e-tests-statefulset-fq9h6, resource: bindings, ignored listing per whitelist
Aug 27 18:38:02.749: INFO: namespace e2e-tests-statefulset-fq9h6 deletion completed in 6.141256438s

• [SLOW TEST:118.448 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
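A rough way to reproduce the behaviour checked above outside the suite, assuming an illustrative namespace "demo" and StatefulSet "web" (the suite's own objects are "ss" in a generated namespace): the readiness probe points at index.html, so moving that file away marks the pod NotReady, and with the default OrderedReady policy the controller refuses to create the next ordinal until the unhealthy pod recovers.

# Create a StatefulSet whose readiness depends on index.html being present.
kubectl create namespace demo
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 1
  podManagementPolicy: OrderedReady   # the default: pods are created and deleted in ordinal order
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:
          httpGet: {path: /index.html, port: 80}   # fails once index.html is moved away
EOF

# Break readiness on web-0, then ask for more replicas: web-1 is not created
# until web-0 is Ready again, mirroring the "scale up will halt" check above.
kubectl -n demo exec web-0 -- sh -c 'mv /usr/share/nginx/html/index.html /tmp/'
kubectl -n demo scale statefulset web --replicas=3
kubectl -n demo get pods -l app=web      # still only web-0, NotReady

# Restore readiness and watch web-1, web-2 come up in order; scaling back down
# removes them in reverse ordinal order (web-2 first).
kubectl -n demo exec web-0 -- sh -c 'mv /tmp/index.html /usr/share/nginx/html/'
kubectl -n demo scale statefulset web --replicas=0
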
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:38:02.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-70928718-e894-11ea-b58c-0242ac11000b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:38:15.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w5t4k" for this suite.
Aug 27 18:38:39.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:38:39.527: INFO: namespace: e2e-tests-configmap-w5t4k, resource: bindings, ignored listing per whitelist
Aug 27 18:38:39.583: INFO: namespace e2e-tests-configmap-w5t4k deletion completed in 24.271078882s

• [SLOW TEST:36.833 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
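A minimal sketch of what this spec exercises, with illustrative names (namespace "demo", ConfigMap "mixed-cm", pod "cm-reader"): kubectl stores non-UTF-8 file content under binaryData, and both text and binary keys appear as files in the mounted volume.

printf '\001\002\003\004' > payload.bin
kubectl -n demo create configmap mixed-cm --from-literal=greeting=hello --from-file=payload.bin
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/greeting; wc -c /etc/cm/payload.bin"]
    volumeMounts:
    - {name: cm, mountPath: /etc/cm}
  volumes:
  - name: cm
    configMap: {name: mixed-cm}
EOF
kubectl -n demo logs cm-reader    # expect "hello" and a 4-byte payload.bin
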
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:38:39.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-88s5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-88s5l to expose endpoints map[]
Aug 27 18:38:40.008: INFO: Get endpoints failed (66.983321ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 27 18:38:41.011: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-88s5l exposes endpoints map[] (1.069806371s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-88s5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-88s5l to expose endpoints map[pod1:[80]]
Aug 27 18:38:45.128: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-88s5l exposes endpoints map[pod1:[80]] (4.113191247s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-88s5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-88s5l to expose endpoints map[pod2:[80] pod1:[80]]
Aug 27 18:38:49.537: INFO: Unexpected endpoints: found map[873fd3cb-e894-11ea-a485-0242ac120004:[80]], expected map[pod1:[80] pod2:[80]] (4.403307021s elapsed, will retry)
Aug 27 18:38:50.547: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-88s5l exposes endpoints map[pod1:[80] pod2:[80]] (5.413734219s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-88s5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-88s5l to expose endpoints map[pod2:[80]]
Aug 27 18:38:51.621: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-88s5l exposes endpoints map[pod2:[80]] (1.068624477s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-88s5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-88s5l to expose endpoints map[]
Aug 27 18:38:52.652: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-88s5l exposes endpoints map[] (1.027073271s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:38:52.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-88s5l" for this suite.
Aug 27 18:38:58.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:38:59.010: INFO: namespace: e2e-tests-services-88s5l, resource: bindings, ignored listing per whitelist
Aug 27 18:38:59.058: INFO: namespace e2e-tests-services-88s5l deletion completed in 6.088542688s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:19.475 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
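The endpoint bookkeeping verified above can be sketched with plain kubectl, assuming illustrative names (namespace "demo", service "hello", pod "hello-pod"): the Endpoints object lists ready pods that match the service selector and empties again when they are deleted.

kubectl -n demo create service clusterip hello --tcp=80:80        # generated selector: app=hello
kubectl -n demo run hello-pod --image=nginx:1.14-alpine --restart=Never --labels=app=hello --port=80
kubectl -n demo get endpoints hello -w    # an address appears once hello-pod is Ready
kubectl -n demo delete pod hello-pod      # ...and disappears again after deletion
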
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:38:59.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:39:03.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4s2mz" for this suite.
Aug 27 18:39:09.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:39:09.521: INFO: namespace: e2e-tests-emptydir-wrapper-4s2mz, resource: bindings, ignored listing per whitelist
Aug 27 18:39:09.603: INFO: namespace e2e-tests-emptydir-wrapper-4s2mz deletion completed in 6.113089328s

• [SLOW TEST:10.545 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
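In the same spirit as the cleanup steps above (a secret, a configmap, and a pod), a hedged sketch of mounting a Secret and a ConfigMap side by side in one pod without conflict, using illustrative names:

kubectl -n demo create secret generic wrapped-secret --from-literal=s=secret-data
kubectl -n demo create configmap wrapped-cm --from-literal=c=config-data
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "cat /etc/secret/s /etc/cm/c"]
    volumeMounts:
    - {name: sec, mountPath: /etc/secret}
    - {name: cm,  mountPath: /etc/cm}
  volumes:
  - name: sec
    secret: {secretName: wrapped-secret}
  - name: cm
    configMap: {name: wrapped-cm}
EOF
kubectl -n demo logs wrapper-check   # both values readable, no mount conflict
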
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:39:09.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 27 18:39:09.723: INFO: Waiting up to 5m0s for pod "pod-98594b89-e894-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-wzwmg" to be "success or failure"
Aug 27 18:39:09.751: INFO: Pod "pod-98594b89-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.349939ms
Aug 27 18:39:11.790: INFO: Pod "pod-98594b89-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066982861s
Aug 27 18:39:13.794: INFO: Pod "pod-98594b89-e894-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070721665s
STEP: Saw pod success
Aug 27 18:39:13.794: INFO: Pod "pod-98594b89-e894-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:39:13.797: INFO: Trying to get logs from node hunter-worker2 pod pod-98594b89-e894-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:39:13.821: INFO: Waiting for pod pod-98594b89-e894-11ea-b58c-0242ac11000b to disappear
Aug 27 18:39:13.825: INFO: Pod pod-98594b89-e894-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:39:13.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wzwmg" for this suite.
Aug 27 18:39:19.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:39:19.894: INFO: namespace: e2e-tests-emptydir-wzwmg, resource: bindings, ignored listing per whitelist
Aug 27 18:39:19.919: INFO: namespace e2e-tests-emptydir-wzwmg deletion completed in 6.0899628s

• [SLOW TEST:10.315 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
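A sketch of the tmpfs variant checked above (names are illustrative; the conformance test uses its own test image rather than busybox): a non-root container writes into a Memory-backed emptyDir and the file comes out 0644 on a tmpfs mount.

cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "umask 022 && echo hi > /mnt/test/f && ls -ln /mnt/test/f && grep /mnt/test /proc/mounts"]
    volumeMounts:
    - {name: scratch, mountPath: /mnt/test}
  volumes:
  - name: scratch
    emptyDir: {medium: Memory}       # tmpfs-backed
EOF
kubectl -n demo logs emptydir-tmpfs  # expect -rw-r--r-- (0644), uid 1000, fstype tmpfs
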
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:39:19.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-66b6q/configmap-test-9e879900-e894-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 18:39:20.148: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-66b6q" to be "success or failure"
Aug 27 18:39:20.161: INFO: Pod "pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.06449ms
Aug 27 18:39:22.164: INFO: Pod "pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016359832s
Aug 27 18:39:24.169: INFO: Pod "pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.020687749s
Aug 27 18:39:26.172: INFO: Pod "pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024045302s
STEP: Saw pod success
Aug 27 18:39:26.172: INFO: Pod "pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:39:26.175: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b container env-test: 
STEP: delete the pod
Aug 27 18:39:26.191: INFO: Waiting for pod pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b to disappear
Aug 27 18:39:26.226: INFO: Pod pod-configmaps-9e8d2ecb-e894-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:39:26.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-66b6q" for this suite.
Aug 27 18:39:32.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:39:32.279: INFO: namespace: e2e-tests-configmap-66b6q, resource: bindings, ignored listing per whitelist
Aug 27 18:39:32.341: INFO: namespace e2e-tests-configmap-66b6q deletion completed in 6.110231166s

• [SLOW TEST:12.422 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
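A minimal sketch of the env-var consumption path, with illustrative names (ConfigMap "env-cm", pod "env-check"):

kubectl -n demo create configmap env-cm --from-literal=DATA=value-1
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA=$CONFIG_DATA"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef: {name: env-cm, key: DATA}
EOF
kubectl -n demo logs env-check    # CONFIG_DATA=value-1
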
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:39:32.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Aug 27 18:39:32.575: INFO: Waiting up to 5m0s for pod "client-containers-a5fab281-e894-11ea-b58c-0242ac11000b" in namespace "e2e-tests-containers-8w47q" to be "success or failure"
Aug 27 18:39:32.593: INFO: Pod "client-containers-a5fab281-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.602115ms
Aug 27 18:39:35.054: INFO: Pod "client-containers-a5fab281-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47859124s
Aug 27 18:39:37.057: INFO: Pod "client-containers-a5fab281-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482122678s
Aug 27 18:39:39.380: INFO: Pod "client-containers-a5fab281-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.804639392s
Aug 27 18:39:41.384: INFO: Pod "client-containers-a5fab281-e894-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.808507686s
STEP: Saw pod success
Aug 27 18:39:41.384: INFO: Pod "client-containers-a5fab281-e894-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:39:41.387: INFO: Trying to get logs from node hunter-worker2 pod client-containers-a5fab281-e894-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:39:41.762: INFO: Waiting for pod client-containers-a5fab281-e894-11ea-b58c-0242ac11000b to disappear
Aug 27 18:39:41.931: INFO: Pod client-containers-a5fab281-e894-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:39:41.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8w47q" for this suite.
Aug 27 18:39:50.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:39:50.353: INFO: namespace: e2e-tests-containers-8w47q, resource: bindings, ignored listing per whitelist
Aug 27 18:39:50.370: INFO: namespace e2e-tests-containers-8w47q deletion completed in 8.434000648s

• [SLOW TEST:18.029 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
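The behaviour above boils down to the pod spec leaving command and args unset so the image's own ENTRYPOINT/CMD runs; a minimal illustrative sketch:

cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox     # no command/args: the image's default CMD ("sh") runs and exits
EOF
# For comparison: setting command replaces the image ENTRYPOINT and setting args
# replaces the image CMD; leaving both blank is what this spec verifies.
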
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:39:50.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b0fb115b-e894-11ea-b58c-0242ac11000b
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b0fb115b-e894-11ea-b58c-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:40:01.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d9dlc" for this suite.
Aug 27 18:40:25.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:40:25.906: INFO: namespace: e2e-tests-projected-d9dlc, resource: bindings, ignored listing per whitelist
Aug 27 18:40:25.938: INFO: namespace e2e-tests-projected-d9dlc deletion completed in 24.418419313s

• [SLOW TEST:35.568 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
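A sketch of watching a projected configMap update propagate into the mounted file (illustrative names; propagation can take up to the kubelet sync period):

kubectl -n demo create configmap proj-cm --from-literal=key=value-1
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: proj-watch
spec:
  containers:
  - name: watch
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/proj/key; echo; sleep 5; done"]
    volumeMounts:
    - {name: proj, mountPath: /etc/proj}
  volumes:
  - name: proj
    projected:
      sources:
      - configMap: {name: proj-cm}
EOF
kubectl -n demo patch configmap proj-cm -p '{"data":{"key":"value-2"}}'
kubectl -n demo logs -f proj-watch   # output flips from value-1 to value-2 after a short delay
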
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:40:25.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 27 18:40:26.145: INFO: Waiting up to 5m0s for pod "pod-c5e80440-e894-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-d65cb" to be "success or failure"
Aug 27 18:40:26.242: INFO: Pod "pod-c5e80440-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.640934ms
Aug 27 18:40:28.420: INFO: Pod "pod-c5e80440-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274548244s
Aug 27 18:40:30.424: INFO: Pod "pod-c5e80440-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278915138s
Aug 27 18:40:32.704: INFO: Pod "pod-c5e80440-e894-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.558745757s
STEP: Saw pod success
Aug 27 18:40:32.704: INFO: Pod "pod-c5e80440-e894-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:40:32.706: INFO: Trying to get logs from node hunter-worker2 pod pod-c5e80440-e894-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:40:33.143: INFO: Waiting for pod pod-c5e80440-e894-11ea-b58c-0242ac11000b to disappear
Aug 27 18:40:33.687: INFO: Pod pod-c5e80440-e894-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:40:33.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d65cb" for this suite.
Aug 27 18:40:42.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:40:42.427: INFO: namespace: e2e-tests-emptydir-d65cb, resource: bindings, ignored listing per whitelist
Aug 27 18:40:42.467: INFO: namespace e2e-tests-emptydir-d65cb deletion completed in 8.445025572s

• [SLOW TEST:16.529 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
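The default-medium variant above differs from the tmpfs sketch earlier only in the volume stanza and the expected mode; an illustrative counterpart:

cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default
spec:
  restartPolicy: Never
  securityContext: {runAsUser: 1000}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "umask 111 && echo hi > /mnt/test/f && ls -ln /mnt/test/f && grep /mnt/test /proc/mounts"]
    volumeMounts:
    - {name: scratch, mountPath: /mnt/test}
  volumes:
  - name: scratch
    emptyDir: {}                     # no medium: backed by node storage, not tmpfs
EOF
kubectl -n demo logs emptydir-default   # expect -rw-rw-rw- (0666) and a non-tmpfs filesystem
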
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:40:42.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Aug 27 18:40:42.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8mrnw'
Aug 27 18:40:42.836: INFO: stderr: ""
Aug 27 18:40:42.836: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 27 18:40:43.858: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:40:43.858: INFO: Found 0 / 1
Aug 27 18:40:44.891: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:40:44.891: INFO: Found 0 / 1
Aug 27 18:40:45.944: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:40:45.944: INFO: Found 0 / 1
Aug 27 18:40:47.076: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:40:47.076: INFO: Found 1 / 1
Aug 27 18:40:47.076: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 27 18:40:47.079: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:40:47.079: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 18:40:47.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-bnhds --namespace=e2e-tests-kubectl-8mrnw -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 27 18:40:47.429: INFO: stderr: ""
Aug 27 18:40:47.429: INFO: stdout: "pod/redis-master-bnhds patched\n"
STEP: checking annotations
Aug 27 18:40:47.656: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 18:40:47.656: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:40:47.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8mrnw" for this suite.
Aug 27 18:41:10.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:41:10.065: INFO: namespace: e2e-tests-kubectl-8mrnw, resource: bindings, ignored listing per whitelist
Aug 27 18:41:10.077: INFO: namespace e2e-tests-kubectl-8mrnw deletion completed in 22.417256541s

• [SLOW TEST:27.609 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
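The patch applied above is an ordinary strategic-merge patch; against any pod it looks like this (pod name illustrative):

kubectl -n demo patch pod hello-pod -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl -n demo get pod hello-pod -o jsonpath='{.metadata.annotations.x}'   # prints: y
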
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:41:10.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Aug 27 18:41:17.681: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:41:43.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-74pg8" for this suite.
Aug 27 18:41:51.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:41:51.752: INFO: namespace: e2e-tests-namespaces-74pg8, resource: bindings, ignored listing per whitelist
Aug 27 18:41:51.793: INFO: namespace e2e-tests-namespaces-74pg8 deletion completed in 8.291810526s
STEP: Destroying namespace "e2e-tests-nsdeletetest-zbq7z" for this suite.
Aug 27 18:41:51.795: INFO: Namespace e2e-tests-nsdeletetest-zbq7z was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-5ghtg" for this suite.
Aug 27 18:41:57.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:41:57.903: INFO: namespace: e2e-tests-nsdeletetest-5ghtg, resource: bindings, ignored listing per whitelist
Aug 27 18:41:58.001: INFO: namespace e2e-tests-nsdeletetest-5ghtg deletion completed in 6.205341002s

• [SLOW TEST:47.924 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
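A quick illustrative check of the same guarantee, namespace deletion removing the pods it contains (names are assumptions):

kubectl create namespace scratch-ns
kubectl -n scratch-ns run sleeper --image=busybox --restart=Never -- sleep 3600
kubectl delete namespace scratch-ns   # blocks until the namespace and its pods are gone
kubectl get pods -n scratch-ns        # fails: the namespace no longer exists
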
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:41:58.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-fd0bb594-e894-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:41:58.708: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-rwj8l" to be "success or failure"
Aug 27 18:41:58.860: INFO: Pod "pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 152.435235ms
Aug 27 18:42:00.915: INFO: Pod "pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206702361s
Aug 27 18:42:02.917: INFO: Pod "pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209373999s
Aug 27 18:42:05.059: INFO: Pod "pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.350559186s
Aug 27 18:42:07.196: INFO: Pod "pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.48808798s
STEP: Saw pod success
Aug 27 18:42:07.196: INFO: Pod "pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:42:07.199: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Aug 27 18:42:07.670: INFO: Waiting for pod pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b to disappear
Aug 27 18:42:07.861: INFO: Pod pod-projected-secrets-fd0db751-e894-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:42:07.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rwj8l" for this suite.
Aug 27 18:42:20.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:42:21.320: INFO: namespace: e2e-tests-projected-rwj8l, resource: bindings, ignored listing per whitelist
Aug 27 18:42:21.384: INFO: namespace e2e-tests-projected-rwj8l deletion completed in 13.519228399s

• [SLOW TEST:23.383 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
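A sketch of projecting one Secret at two mount points in a single pod, with illustrative names (Secret "shared-secret", pod "two-mounts"):

kubectl -n demo create secret generic shared-secret --from-literal=token=abc123
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: two-mounts
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "cat /etc/a/token /etc/b/token"]
    volumeMounts:
    - {name: va, mountPath: /etc/a}
    - {name: vb, mountPath: /etc/b}
  volumes:
  - name: va
    projected:
      sources:
      - secret: {name: shared-secret}
  - name: vb
    projected:
      sources:
      - secret: {name: shared-secret}
EOF
kubectl -n demo logs two-mounts    # the token is readable from both mount points
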
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:42:21.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 18:42:23.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-n2h6d'
Aug 27 18:42:46.996: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 18:42:46.996: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Aug 27 18:42:47.660: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug 27 18:42:48.179: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 27 18:42:48.267: INFO: scanned /root for discovery docs: 
Aug 27 18:42:48.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-n2h6d'
Aug 27 18:43:11.820: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 27 18:43:11.820: INFO: stdout: "Created e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f\nScaling up e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 27 18:43:11.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-n2h6d'
Aug 27 18:43:11.937: INFO: stderr: ""
Aug 27 18:43:11.937: INFO: stdout: "e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f-6v2x5 "
Aug 27 18:43:11.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f-6v2x5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n2h6d'
Aug 27 18:43:12.077: INFO: stderr: ""
Aug 27 18:43:12.077: INFO: stdout: "true"
Aug 27 18:43:12.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f-6v2x5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n2h6d'
Aug 27 18:43:12.193: INFO: stderr: ""
Aug 27 18:43:12.193: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 27 18:43:12.193: INFO: e2e-test-nginx-rc-37e991af2c087512daee13fedd23d32f-6v2x5 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 27 18:43:12.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-n2h6d'
Aug 27 18:43:12.318: INFO: stderr: ""
Aug 27 18:43:12.318: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:43:12.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n2h6d" for this suite.
Aug 27 18:43:22.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:43:22.499: INFO: namespace: e2e-tests-kubectl-n2h6d, resource: bindings, ignored listing per whitelist
Aug 27 18:43:22.501: INFO: namespace e2e-tests-kubectl-n2h6d deletion completed in 10.159512293s

• [SLOW TEST:61.117 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
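For context, `kubectl run --generator=run/v1` (deprecated, as the stderr above notes) creates a bare ReplicationController, and `kubectl rolling-update` then clones it under a hashed name before renaming it back, which is exactly the sequence printed in the stdout above. Below is a minimal Go sketch of roughly what that initial controller looks like, using the v1.13-era k8s.io/api types; the labels and defaults are approximations, not the exact object the command produced.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        labels := map[string]string{"run": "e2e-test-nginx-rc"}
        rc := corev1.ReplicationController{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels, // rolling-update relies on this selector to swap pods over
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-rc",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(rc, "", "  ")
        fmt.Println(string(out))
    }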
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:43:22.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 18:43:22.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-w4dkc'
Aug 27 18:43:22.789: INFO: stderr: ""
Aug 27 18:43:22.789: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug 27 18:43:27.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-w4dkc -o json'
Aug 27 18:43:27.937: INFO: stderr: ""
Aug 27 18:43:27.937: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-27T18:43:22Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-w4dkc\",\n        \"resourceVersion\": \"2700605\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-w4dkc/pods/e2e-test-nginx-pod\",\n        \"uid\": \"2f30af36-e895-11ea-a485-0242ac120004\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-6qjqr\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-6qjqr\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-6qjqr\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T18:43:22Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T18:43:26Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T18:43:26Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T18:43:22Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://28792a07c6a0a6126ada330f10b1dfd4fc97e389eeca1949e364adee83cc6152\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-27T18:43:26Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.2\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.223\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-27T18:43:22Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 27 18:43:27.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-w4dkc'
Aug 27 18:43:28.208: INFO: stderr: ""
Aug 27 18:43:28.208: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Aug 27 18:43:28.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-w4dkc'
Aug 27 18:43:38.107: INFO: stderr: ""
Aug 27 18:43:38.107: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:43:38.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w4dkc" for this suite.
Aug 27 18:43:44.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:43:44.136: INFO: namespace: e2e-tests-kubectl-w4dkc, resource: bindings, ignored listing per whitelist
Aug 27 18:43:44.197: INFO: namespace e2e-tests-kubectl-w4dkc deletion completed in 6.087041763s

• [SLOW TEST:21.696 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
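The test fetches the pod as JSON (shown above), swaps the container image to docker.io/library/busybox:1.29, and pipes the result back through `kubectl replace -f -`. The Go sketch below mirrors that edit step; reading the object from stdin and the `go run` pipeline in the comment are illustrative assumptions, not part of the test itself.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Usage (illustrative):
        //   kubectl get pod e2e-test-nginx-pod -o json | go run swap-image.go | kubectl replace -f -
        // Editing the fetched object, rather than authoring a fresh manifest, keeps the
        // pod's immutable fields intact so that only the image changes.
        var pod corev1.Pod
        if err := json.NewDecoder(os.Stdin).Decode(&pod); err != nil {
            panic(err)
        }
        pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
        out, err := json.MarshalIndent(&pod, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }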
SSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:43:44.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Aug 27 18:43:44.308: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-cqdp6" to be "success or failure"
Aug 27 18:43:44.325: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.219722ms
Aug 27 18:43:46.329: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021380192s
Aug 27 18:43:48.383: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074654602s
Aug 27 18:43:50.529: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220869137s
Aug 27 18:43:52.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224210166s
Aug 27 18:43:54.536: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228358484s
STEP: Saw pod success
Aug 27 18:43:54.536: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 27 18:43:54.539: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 27 18:43:54.863: INFO: Waiting for pod pod-host-path-test to disappear
Aug 27 18:43:55.089: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:43:55.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-cqdp6" for this suite.
Aug 27 18:44:01.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:44:01.187: INFO: namespace: e2e-tests-hostpath-cqdp6, resource: bindings, ignored listing per whitelist
Aug 27 18:44:01.194: INFO: namespace e2e-tests-hostpath-cqdp6 deletion completed in 6.100059667s

• [SLOW TEST:16.997 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
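The spec behind "pod-host-path-test" is not shown in the log. As a simplified sketch of the same idea, a hostPath volume whose mode the container then inspects, built with the v1.13-era k8s.io/api types; the host path, image and stat command are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        hostPathType := corev1.HostPathDirectoryOrCreate
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "hostpath-mode-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{
                            Path: "/tmp/hostpath-demo", // assumed host directory
                            Type: &hostPathType,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-1",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "stat -c 'mode=%a type=%F' /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }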
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:44:01.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-5ndbv/secret-test-463696b9-e895-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:44:01.480: INFO: Waiting up to 5m0s for pod "pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-5ndbv" to be "success or failure"
Aug 27 18:44:01.554: INFO: Pod "pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 73.997504ms
Aug 27 18:44:03.558: INFO: Pod "pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077495909s
Aug 27 18:44:05.561: INFO: Pod "pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.081267321s
Aug 27 18:44:07.565: INFO: Pod "pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085111305s
STEP: Saw pod success
Aug 27 18:44:07.565: INFO: Pod "pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:44:07.568: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b container env-test: 
STEP: delete the pod
Aug 27 18:44:07.606: INFO: Waiting for pod pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b to disappear
Aug 27 18:44:07.617: INFO: Pod pod-configmaps-463dbb05-e895-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:44:07.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5ndbv" for this suite.
Aug 27 18:44:13.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:44:13.695: INFO: namespace: e2e-tests-secrets-5ndbv, resource: bindings, ignored listing per whitelist
Aug 27 18:44:13.708: INFO: namespace e2e-tests-secrets-5ndbv deletion completed in 6.079693102s

• [SLOW TEST:12.513 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
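Here the secret is consumed through an environment variable rather than a volume. A minimal sketch of that pattern with the v1.13-era k8s.io/api types; the secret name "env-secret", its key and the container command are assumptions, not the test's actual values.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Assumes the secret already exists, e.g. created with:
        //   kubectl create secret generic env-secret --from-literal=data-1=value-1
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "echo SECRET_DATA=$SECRET_DATA"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "env-secret"},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }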
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:44:13.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:44:13.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-7xm2x" to be "success or failure"
Aug 27 18:44:13.894: INFO: Pod "downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450081ms
Aug 27 18:44:15.940: INFO: Pod "downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04950946s
Aug 27 18:44:18.083: INFO: Pod "downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192898771s
Aug 27 18:44:20.359: INFO: Pod "downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.468820674s
STEP: Saw pod success
Aug 27 18:44:20.359: INFO: Pod "downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:44:20.362: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:44:20.752: INFO: Waiting for pod downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b to disappear
Aug 27 18:44:21.083: INFO: Pod downwardapi-volume-4da65bd0-e895-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:44:21.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7xm2x" for this suite.
Aug 27 18:44:29.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:44:29.147: INFO: namespace: e2e-tests-projected-7xm2x, resource: bindings, ignored listing per whitelist
Aug 27 18:44:29.193: INFO: namespace e2e-tests-projected-7xm2x deletion completed in 8.105403714s

• [SLOW TEST:15.485 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
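"should provide podname only" exposes just metadata.name to the container through a projected volume carrying a downward API source. A sketch of that wiring on the v1.13-era k8s.io/api types; the file path, image and command are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-podname-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "podname",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // the container's log should contain the pod's own name
    }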
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:44:29.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:44:29.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-hb2q8" to be "success or failure"
Aug 27 18:44:29.324: INFO: Pod "downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.876866ms
Aug 27 18:44:31.328: INFO: Pod "downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005547018s
Aug 27 18:44:33.331: INFO: Pod "downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.009474866s
Aug 27 18:44:35.336: INFO: Pod "downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014001904s
STEP: Saw pod success
Aug 27 18:44:35.336: INFO: Pod "downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:44:35.340: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:44:35.363: INFO: Waiting for pod downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b to disappear
Aug 27 18:44:35.410: INFO: Pod downwardapi-volume-56d90fb6-e895-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:44:35.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hb2q8" for this suite.
Aug 27 18:44:41.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:44:41.483: INFO: namespace: e2e-tests-projected-hb2q8, resource: bindings, ignored listing per whitelist
Aug 27 18:44:41.503: INFO: namespace e2e-tests-projected-hb2q8 deletion completed in 6.089588682s

• [SLOW TEST:12.309 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
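Same projected downward API mechanism, but here the file comes from a resourceFieldRef, so the container must actually declare a memory limit to expose. Sketch below with an assumed 64Mi limit, v1.13-era types as before.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-memlimit-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "memory_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo", ReadOnly: true}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // the mounted file should contain the limit in bytes (67108864)
    }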
SSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:44:41.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 27 18:44:48.823: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5e32a155-e895-11ea-b58c-0242ac11000b"
Aug 27 18:44:48.824: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5e32a155-e895-11ea-b58c-0242ac11000b" in namespace "e2e-tests-pods-jgbhq" to be "terminated due to deadline exceeded"
Aug 27 18:44:49.600: INFO: Pod "pod-update-activedeadlineseconds-5e32a155-e895-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 776.779939ms
Aug 27 18:44:51.677: INFO: Pod "pod-update-activedeadlineseconds-5e32a155-e895-11ea-b58c-0242ac11000b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.853360331s
Aug 27 18:44:51.677: INFO: Pod "pod-update-activedeadlineseconds-5e32a155-e895-11ea-b58c-0242ac11000b" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:44:51.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jgbhq" for this suite.
Aug 27 18:44:57.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:44:57.976: INFO: namespace: e2e-tests-pods-jgbhq, resource: bindings, ignored listing per whitelist
Aug 27 18:44:59.340: INFO: namespace e2e-tests-pods-jgbhq deletion completed in 7.65969381s

• [SLOW TEST:17.837 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
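The test creates a long-running pod, then updates spec.activeDeadlineSeconds to a small value and waits for the kubelet to kill it, which is what the Phase="Failed", Reason="DeadlineExceeded" line above shows. Below is a sketch of the two pieces involved, a pod and the patch that tightens its deadline; the names, image and the 5-second value are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A pod that would happily run for an hour if left alone.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "active-deadline-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "sleep 3600"},
                }},
            },
        }
        podJSON, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(podJSON))

        // Once the pod is running, shrinking the deadline makes the kubelet terminate it
        // and the pod ends up Failed with reason DeadlineExceeded. Applied with, e.g.:
        //   kubectl patch pod active-deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
        patch := map[string]interface{}{
            "spec": map[string]interface{}{"activeDeadlineSeconds": 5},
        }
        patchJSON, _ := json.Marshal(patch)
        fmt.Println(string(patchJSON))
    }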
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:44:59.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:45:00.815: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"695d1d58-e895-11ea-a485-0242ac120004", Controller:(*bool)(0xc0012d1b3a), BlockOwnerDeletion:(*bool)(0xc0012d1b3b)}}
Aug 27 18:45:01.203: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"69560596-e895-11ea-a485-0242ac120004", Controller:(*bool)(0xc0011959e2), BlockOwnerDeletion:(*bool)(0xc0011959e3)}}
Aug 27 18:45:01.226: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6956a7a2-e895-11ea-a485-0242ac120004", Controller:(*bool)(0xc0009bf78a), BlockOwnerDeletion:(*bool)(0xc0009bf78b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:45:12.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4dk9c" for this suite.
Aug 27 18:45:18.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:45:18.822: INFO: namespace: e2e-tests-gc-4dk9c, resource: bindings, ignored listing per whitelist
Aug 27 18:45:18.877: INFO: namespace e2e-tests-gc-4dk9c deletion completed in 6.266460811s

• [SLOW TEST:19.536 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
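The three INFO lines above show pod1 owned by pod3, pod2 owned by pod1 and pod3 owned by pod2, i.e. a deliberate ownerReference cycle that the garbage collector must still be able to clean up. A sketch of how such references are built; in the real test the pods are created first and the references carry the server-assigned UIDs visible in the log, so the UID strings below are placeholders only.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func pausePod(name string, owner metav1.OwnerReference) corev1.Pod {
        return corev1.Pod{
            TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{
                Name:            name,
                OwnerReferences: []metav1.OwnerReference{owner},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
            },
        }
    }

    func ownerRef(name, uid string) metav1.OwnerReference {
        controller, block := true, true
        return metav1.OwnerReference{
            APIVersion:         "v1",
            Kind:               "Pod",
            Name:               name,
            UID:                types.UID(uid), // placeholder; real UIDs come from the API server
            Controller:         &controller,
            BlockOwnerDeletion: &block,
        }
    }

    func main() {
        pods := []corev1.Pod{
            pausePod("pod1", ownerRef("pod3", "uid-of-pod3")),
            pausePod("pod2", ownerRef("pod1", "uid-of-pod1")),
            pausePod("pod3", ownerRef("pod2", "uid-of-pod2")),
        }
        for _, p := range pods {
            fmt.Printf("%s is owned by %s\n", p.Name, p.OwnerReferences[0].Name)
        }
    }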
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:45:18.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Aug 27 18:45:18.984: INFO: Waiting up to 5m0s for pod "var-expansion-747085ee-e895-11ea-b58c-0242ac11000b" in namespace "e2e-tests-var-expansion-7qkw9" to be "success or failure"
Aug 27 18:45:18.998: INFO: Pod "var-expansion-747085ee-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.686376ms
Aug 27 18:45:21.002: INFO: Pod "var-expansion-747085ee-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017916942s
Aug 27 18:45:23.006: INFO: Pod "var-expansion-747085ee-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021745722s
Aug 27 18:45:25.009: INFO: Pod "var-expansion-747085ee-e895-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025114719s
STEP: Saw pod success
Aug 27 18:45:25.009: INFO: Pod "var-expansion-747085ee-e895-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:45:25.012: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-747085ee-e895-11ea-b58c-0242ac11000b container dapi-container: 
STEP: delete the pod
Aug 27 18:45:25.086: INFO: Waiting for pod var-expansion-747085ee-e895-11ea-b58c-0242ac11000b to disappear
Aug 27 18:45:25.094: INFO: Pod var-expansion-747085ee-e895-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:45:25.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7qkw9" for this suite.
Aug 27 18:45:31.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:45:31.122: INFO: namespace: e2e-tests-var-expansion-7qkw9, resource: bindings, ignored listing per whitelist
Aug 27 18:45:31.181: INFO: namespace e2e-tests-var-expansion-7qkw9 deletion completed in 6.083975048s

• [SLOW TEST:12.304 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
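Env composition here means referring to earlier env vars with the $(VAR) syntax, which the kubelet expands before the container starts. A minimal sketch; the variable names and values are made up, not the test's.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "echo $COMPOSED_VAR"},
                    Env: []corev1.EnvVar{
                        {Name: "FOO", Value: "foo-value"},
                        {Name: "BAR", Value: "bar-value"},
                        // $(FOO) and $(BAR) are expanded by the kubelet, not by the shell.
                        {Name: "COMPOSED_VAR", Value: "$(FOO);;$(BAR)"},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // the container log should read "foo-value;;bar-value"
    }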
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:45:31.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:45:35.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sdq9p" for this suite.
Aug 27 18:46:21.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:46:21.439: INFO: namespace: e2e-tests-kubelet-test-sdq9p, resource: bindings, ignored listing per whitelist
Aug 27 18:46:21.455: INFO: namespace e2e-tests-kubelet-test-sdq9p deletion completed in 46.093202927s

• [SLOW TEST:50.274 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
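This case only needs a pod whose single busybox container echoes a line, which the test then reads back through the kubelet's log endpoint. A sketch of such a pod; the message text is an assumption.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/sh", "-c", "echo 'Hello from busybox'"},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        // After the container exits, `kubectl logs busybox-logs-demo` should print the echoed line.
        fmt.Println(string(out))
    }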
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:46:21.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-99c8fcd2-e895-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:46:21.632: INFO: Waiting up to 5m0s for pod "pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-wrzsx" to be "success or failure"
Aug 27 18:46:21.635: INFO: Pod "pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393032ms
Aug 27 18:46:23.639: INFO: Pod "pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006820957s
Aug 27 18:46:25.643: INFO: Pod "pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.01042413s
Aug 27 18:46:27.647: INFO: Pod "pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014495075s
STEP: Saw pod success
Aug 27 18:46:27.647: INFO: Pod "pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:46:27.650: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Aug 27 18:46:27.841: INFO: Waiting for pod pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b to disappear
Aug 27 18:46:27.850: INFO: Pod pod-secrets-99c97a23-e895-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:46:27.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wrzsx" for this suite.
Aug 27 18:46:37.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:46:37.933: INFO: namespace: e2e-tests-secrets-wrzsx, resource: bindings, ignored listing per whitelist
Aug 27 18:46:37.948: INFO: namespace e2e-tests-secrets-wrzsx deletion completed in 10.09512378s

• [SLOW TEST:16.493 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
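"mappings and Item Mode set" means the secret volume remaps a key to a new file path and pins a per-item file mode. Sketch with an assumed key, path and 0400 mode, again on the v1.13-era types; the secret is assumed to exist already.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        mode := int32(0400) // owner read-only; shows up as "400" in stat output
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "secret-item-mode-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName: "secret-test-map",
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "new-path-data-1",
                                Mode: &mode,
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }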
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:46:37.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 27 18:46:53.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:46:53.750: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:46:55.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:46:55.754: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:46:57.751: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:46:57.913: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:46:59.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:46:59.753: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:01.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:01.755: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:03.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:03.754: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:05.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:05.753: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:07.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:07.753: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:09.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:09.798: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:11.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:11.754: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:13.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:13.754: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:15.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:15.754: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:17.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:17.942: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 18:47:19.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 18:47:19.755: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:47:19.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7qnlm" for this suite.
Aug 27 18:47:44.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:47:44.065: INFO: namespace: e2e-tests-container-lifecycle-hook-7qnlm, resource: bindings, ignored listing per whitelist
Aug 27 18:47:44.103: INFO: namespace e2e-tests-container-lifecycle-hook-7qnlm deletion completed in 24.344541367s

• [SLOW TEST:66.155 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
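In the real test the postStart hook execs a command that reports back to the handler pod created in the BeforeEach step above; the sketch below simplifies that to a hook that writes a local file, which is enough to show where the hook hangs off the container spec. It uses the corev1.Handler type name from the v1.13-era API this run is built against (newer releases call the same struct LifecycleHandler).

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "pod-with-poststart-exec-hook",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &corev1.Lifecycle{
                        // corev1.Handler in v1.13; renamed to corev1.LifecycleHandler later.
                        PostStart: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "echo poststart ran > /tmp/poststart"},
                            },
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }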
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:47:44.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-cb0f9116-e895-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:47:44.309: INFO: Waiting up to 5m0s for pod "pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-gmfmz" to be "success or failure"
Aug 27 18:47:44.313: INFO: Pod "pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.6661ms
Aug 27 18:47:46.409: INFO: Pod "pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09992807s
Aug 27 18:47:48.413: INFO: Pod "pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104158602s
STEP: Saw pod success
Aug 27 18:47:48.413: INFO: Pod "pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:47:48.416: INFO: Trying to get logs from node hunter-worker pod pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Aug 27 18:47:48.692: INFO: Waiting for pod pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b to disappear
Aug 27 18:47:48.726: INFO: Pod pod-secrets-cb120d83-e895-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:47:48.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gmfmz" for this suite.
Aug 27 18:47:54.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:47:54.780: INFO: namespace: e2e-tests-secrets-gmfmz, resource: bindings, ignored listing per whitelist
Aug 27 18:47:54.825: INFO: namespace e2e-tests-secrets-gmfmz deletion completed in 6.094441942s

• [SLOW TEST:10.721 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
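This is the plain-Secret counterpart of the projected-secret case earlier in the log: the same secret is mounted twice, but each volume uses a secret source directly instead of a projected one. The sketch below only swaps the volume sources relative to that earlier example, with the same assumed names and v1.13-era types.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        secretVol := func(name string) corev1.Volume {
            return corev1.Volume{Name: name, VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"}, // assumed name
            }}
        }
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "secret-multi-volume"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes:       []corev1.Volume{secretVol("vol-1"), secretVol("vol-2")},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "ls /etc/vol-1 /etc/vol-2"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "vol-1", MountPath: "/etc/vol-1", ReadOnly: true},
                        {Name: "vol-2", MountPath: "/etc/vol-2", ReadOnly: true},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }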
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:47:54.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 27 18:47:54.960: INFO: PodSpec: initContainers in spec.initContainers
Aug 27 18:48:45.605: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d16d6bfd-e895-11ea-b58c-0242ac11000b", GenerateName:"", Namespace:"e2e-tests-init-container-zmpz2", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-zmpz2/pods/pod-init-d16d6bfd-e895-11ea-b58c-0242ac11000b", UID:"d17023ac-e895-11ea-a485-0242ac120004", ResourceVersion:"2701587", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734150874, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"960183979", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4v549", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001602080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4v549", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4v549", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4v549", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022b2088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0014f7c80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022b2110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022b2130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022b2138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022b213c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734150875, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734150875, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734150875, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734150874, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.232", StartTime:(*v1.Time)(0xc0021de060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a4310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a4380)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ea4f1cc4531fcaee3db85854af5c534859ab1b7628ef5774c4d7da66ac92a5fc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021de0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021de080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:48:45.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-zmpz2" for this suite.
Aug 27 18:49:07.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:49:07.692: INFO: namespace: e2e-tests-init-container-zmpz2, resource: bindings, ignored listing per whitelist
Aug 27 18:49:07.740: INFO: namespace e2e-tests-init-container-zmpz2 deletion completed in 22.094308859s

• [SLOW TEST:72.916 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
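
The dump above already contains the whole recipe for this spec: two init containers (init1 running /bin/false, init2 running /bin/true) ahead of a single pause app container, with RestartPolicy Always, so init1 keeps restarting and neither init2 nor run1 is ever allowed to start. A minimal Go sketch of that pod object, trimmed to the fields that matter (the token volume, resource limits and tolerations from the dump are omitted):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init", Labels: map[string]string{"name": "foo"}},
		Spec: corev1.PodSpec{
			// RestartAlways keeps re-running the failing init1, so init2 and run1 never start.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Println(pod.Spec.InitContainers[0].Command) // [/bin/false]
}
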
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:49:07.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:49:08.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-95lqz" for this suite.
Aug 27 18:49:16.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:49:16.282: INFO: namespace: e2e-tests-kubelet-test-95lqz, resource: bindings, ignored listing per whitelist
Aug 27 18:49:16.316: INFO: namespace e2e-tests-kubelet-test-95lqz deletion completed in 8.091968888s

• [SLOW TEST:8.575 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
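
This spec only creates a pod whose single container exits non-zero forever and then checks that the pod can still be deleted. The log does not show the pod spec, so the following is a hypothetical sketch; the busybox image, the /bin/false command and the pre-context client-go signatures (contemporary with this v1.13 suite) are all assumptions:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAndDeleteFailingPod mirrors the shape of this spec: the container can
// never become ready, yet deleting the pod must still succeed.
func createAndDeleteFailingPod(cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{Name: "bin-false", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		return err
	}
	// Old (pre-context) client-go signature, as used around v1.13.
	return cs.CoreV1().Pods(ns).Delete(pod.Name, &metav1.DeleteOptions{})
}
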
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:49:16.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-lzztj
I0827 18:49:16.413533       6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-lzztj, replica count: 1
I0827 18:49:17.463936       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 18:49:18.464123       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 18:49:19.464345       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 18:49:20.464573       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 18:49:21.464959       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 18:49:22.465154       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 18:49:22.596: INFO: Created: latency-svc-nx5hc
Aug 27 18:49:22.613: INFO: Got endpoints: latency-svc-nx5hc [47.706511ms]
Aug 27 18:49:22.645: INFO: Created: latency-svc-knhb4
Aug 27 18:49:22.720: INFO: Got endpoints: latency-svc-knhb4 [107.786743ms]
Aug 27 18:49:22.883: INFO: Created: latency-svc-857vx
Aug 27 18:49:22.940: INFO: Got endpoints: latency-svc-857vx [326.92761ms]
Aug 27 18:49:22.973: INFO: Created: latency-svc-l64x2
Aug 27 18:49:23.014: INFO: Got endpoints: latency-svc-l64x2 [401.600868ms]
Aug 27 18:49:23.030: INFO: Created: latency-svc-5sqd8
Aug 27 18:49:23.098: INFO: Got endpoints: latency-svc-5sqd8 [485.284057ms]
Aug 27 18:49:23.151: INFO: Created: latency-svc-vprhj
Aug 27 18:49:23.174: INFO: Got endpoints: latency-svc-vprhj [561.128312ms]
Aug 27 18:49:23.210: INFO: Created: latency-svc-frjvc
Aug 27 18:49:23.228: INFO: Got endpoints: latency-svc-frjvc [615.362402ms]
Aug 27 18:49:23.302: INFO: Created: latency-svc-lk82x
Aug 27 18:49:23.306: INFO: Got endpoints: latency-svc-lk82x [693.133901ms]
Aug 27 18:49:23.369: INFO: Created: latency-svc-lhmgc
Aug 27 18:49:23.379: INFO: Got endpoints: latency-svc-lhmgc [766.218425ms]
Aug 27 18:49:23.458: INFO: Created: latency-svc-xdv89
Aug 27 18:49:23.462: INFO: Got endpoints: latency-svc-xdv89 [849.623299ms]
Aug 27 18:49:23.504: INFO: Created: latency-svc-fk4dv
Aug 27 18:49:23.534: INFO: Got endpoints: latency-svc-fk4dv [921.00731ms]
Aug 27 18:49:23.601: INFO: Created: latency-svc-xlpv7
Aug 27 18:49:23.614: INFO: Got endpoints: latency-svc-xlpv7 [1.00123514s]
Aug 27 18:49:23.639: INFO: Created: latency-svc-gs9hm
Aug 27 18:49:23.653: INFO: Got endpoints: latency-svc-gs9hm [1.040069907s]
Aug 27 18:49:23.678: INFO: Created: latency-svc-6bpm9
Aug 27 18:49:23.756: INFO: Got endpoints: latency-svc-6bpm9 [1.143717649s]
Aug 27 18:49:23.786: INFO: Created: latency-svc-dhlhv
Aug 27 18:49:23.802: INFO: Got endpoints: latency-svc-dhlhv [1.189237187s]
Aug 27 18:49:23.825: INFO: Created: latency-svc-c7p87
Aug 27 18:49:23.843: INFO: Got endpoints: latency-svc-c7p87 [1.230715058s]
Aug 27 18:49:23.897: INFO: Created: latency-svc-bz72x
Aug 27 18:49:23.916: INFO: Got endpoints: latency-svc-bz72x [1.194998276s]
Aug 27 18:49:23.949: INFO: Created: latency-svc-llczv
Aug 27 18:49:23.964: INFO: Got endpoints: latency-svc-llczv [1.024502233s]
Aug 27 18:49:23.984: INFO: Created: latency-svc-hfqgq
Aug 27 18:49:24.661: INFO: Got endpoints: latency-svc-hfqgq [1.646577128s]
Aug 27 18:49:24.664: INFO: Created: latency-svc-5wkcf
Aug 27 18:49:24.949: INFO: Got endpoints: latency-svc-5wkcf [1.850603101s]
Aug 27 18:49:25.233: INFO: Created: latency-svc-gr7ql
Aug 27 18:49:25.254: INFO: Got endpoints: latency-svc-gr7ql [2.079826049s]
Aug 27 18:49:25.973: INFO: Created: latency-svc-wxtxn
Aug 27 18:49:25.976: INFO: Got endpoints: latency-svc-wxtxn [2.747702738s]
Aug 27 18:49:26.326: INFO: Created: latency-svc-g7mzx
Aug 27 18:49:26.644: INFO: Got endpoints: latency-svc-g7mzx [3.338156209s]
Aug 27 18:49:26.938: INFO: Created: latency-svc-q2f7d
Aug 27 18:49:26.961: INFO: Got endpoints: latency-svc-q2f7d [3.581819365s]
Aug 27 18:49:27.289: INFO: Created: latency-svc-5dfpr
Aug 27 18:49:28.368: INFO: Got endpoints: latency-svc-5dfpr [4.905212652s]
Aug 27 18:49:28.836: INFO: Created: latency-svc-nvq2q
Aug 27 18:49:29.386: INFO: Got endpoints: latency-svc-nvq2q [5.851810418s]
Aug 27 18:49:29.389: INFO: Created: latency-svc-lqcpn
Aug 27 18:49:29.393: INFO: Got endpoints: latency-svc-lqcpn [5.778741411s]
Aug 27 18:49:29.650: INFO: Created: latency-svc-pcmlb
Aug 27 18:49:29.939: INFO: Got endpoints: latency-svc-pcmlb [6.285679136s]
Aug 27 18:49:30.213: INFO: Created: latency-svc-4bph4
Aug 27 18:49:30.269: INFO: Got endpoints: latency-svc-4bph4 [6.512727526s]
Aug 27 18:49:30.443: INFO: Created: latency-svc-tsgm4
Aug 27 18:49:30.468: INFO: Got endpoints: latency-svc-tsgm4 [6.665527563s]
Aug 27 18:49:30.703: INFO: Created: latency-svc-x59p9
Aug 27 18:49:30.707: INFO: Got endpoints: latency-svc-x59p9 [6.863678655s]
Aug 27 18:49:30.888: INFO: Created: latency-svc-bqd9s
Aug 27 18:49:30.956: INFO: Got endpoints: latency-svc-bqd9s [7.04090526s]
Aug 27 18:49:31.039: INFO: Created: latency-svc-hq6q9
Aug 27 18:49:31.068: INFO: Got endpoints: latency-svc-hq6q9 [7.102831118s]
Aug 27 18:49:31.110: INFO: Created: latency-svc-ms7kg
Aug 27 18:49:31.137: INFO: Got endpoints: latency-svc-ms7kg [6.475756901s]
Aug 27 18:49:31.191: INFO: Created: latency-svc-8sdxj
Aug 27 18:49:31.226: INFO: Got endpoints: latency-svc-8sdxj [6.277215748s]
Aug 27 18:49:31.261: INFO: Created: latency-svc-jnx8m
Aug 27 18:49:31.350: INFO: Got endpoints: latency-svc-jnx8m [6.096082471s]
Aug 27 18:49:31.363: INFO: Created: latency-svc-m9w4d
Aug 27 18:49:31.404: INFO: Got endpoints: latency-svc-m9w4d [5.427705427s]
Aug 27 18:49:31.543: INFO: Created: latency-svc-nrnpb
Aug 27 18:49:31.564: INFO: Got endpoints: latency-svc-nrnpb [4.91969349s]
Aug 27 18:49:31.623: INFO: Created: latency-svc-rbjkt
Aug 27 18:49:31.697: INFO: Got endpoints: latency-svc-rbjkt [4.735778598s]
Aug 27 18:49:31.734: INFO: Created: latency-svc-7gjt8
Aug 27 18:49:31.750: INFO: Got endpoints: latency-svc-7gjt8 [3.382164727s]
Aug 27 18:49:31.782: INFO: Created: latency-svc-m7ph5
Aug 27 18:49:31.846: INFO: Got endpoints: latency-svc-m7ph5 [2.460608154s]
Aug 27 18:49:31.863: INFO: Created: latency-svc-t5gfx
Aug 27 18:49:31.920: INFO: Got endpoints: latency-svc-t5gfx [2.526687416s]
Aug 27 18:49:32.026: INFO: Created: latency-svc-pf9p4
Aug 27 18:49:32.032: INFO: Got endpoints: latency-svc-pf9p4 [2.093102263s]
Aug 27 18:49:32.090: INFO: Created: latency-svc-x4htg
Aug 27 18:49:32.105: INFO: Got endpoints: latency-svc-x4htg [1.83562322s]
Aug 27 18:49:32.164: INFO: Created: latency-svc-rvx77
Aug 27 18:49:32.190: INFO: Got endpoints: latency-svc-rvx77 [1.721914625s]
Aug 27 18:49:32.214: INFO: Created: latency-svc-6pw72
Aug 27 18:49:32.232: INFO: Got endpoints: latency-svc-6pw72 [1.524742025s]
Aug 27 18:49:32.253: INFO: Created: latency-svc-gt6f4
Aug 27 18:49:32.314: INFO: Got endpoints: latency-svc-gt6f4 [1.357130998s]
Aug 27 18:49:32.337: INFO: Created: latency-svc-k9btr
Aug 27 18:49:32.357: INFO: Got endpoints: latency-svc-k9btr [1.289251792s]
Aug 27 18:49:32.382: INFO: Created: latency-svc-vvtqx
Aug 27 18:49:32.401: INFO: Got endpoints: latency-svc-vvtqx [1.263969701s]
Aug 27 18:49:32.488: INFO: Created: latency-svc-2lng8
Aug 27 18:49:32.491: INFO: Got endpoints: latency-svc-2lng8 [1.264402498s]
Aug 27 18:49:32.523: INFO: Created: latency-svc-r7682
Aug 27 18:49:32.565: INFO: Got endpoints: latency-svc-r7682 [1.215077468s]
Aug 27 18:49:32.633: INFO: Created: latency-svc-s4qsb
Aug 27 18:49:32.670: INFO: Created: latency-svc-bjwlz
Aug 27 18:49:32.693: INFO: Got endpoints: latency-svc-s4qsb [1.288565302s]
Aug 27 18:49:32.781: INFO: Created: latency-svc-vxmsb
Aug 27 18:49:32.781: INFO: Got endpoints: latency-svc-bjwlz [1.216773819s]
Aug 27 18:49:32.810: INFO: Got endpoints: latency-svc-vxmsb [1.113573725s]
Aug 27 18:49:32.844: INFO: Created: latency-svc-xvxgk
Aug 27 18:49:32.859: INFO: Got endpoints: latency-svc-xvxgk [1.108624631s]
Aug 27 18:49:32.973: INFO: Created: latency-svc-hbzgv
Aug 27 18:49:32.986: INFO: Got endpoints: latency-svc-hbzgv [1.139567503s]
Aug 27 18:49:33.357: INFO: Created: latency-svc-wp6ns
Aug 27 18:49:33.410: INFO: Got endpoints: latency-svc-wp6ns [1.490592155s]
Aug 27 18:49:33.447: INFO: Created: latency-svc-rbc2f
Aug 27 18:49:33.511: INFO: Got endpoints: latency-svc-rbc2f [1.47931324s]
Aug 27 18:49:33.810: INFO: Created: latency-svc-vzqb2
Aug 27 18:49:33.948: INFO: Got endpoints: latency-svc-vzqb2 [1.843381526s]
Aug 27 18:49:34.236: INFO: Created: latency-svc-r85lg
Aug 27 18:49:34.242: INFO: Got endpoints: latency-svc-r85lg [2.051912348s]
Aug 27 18:49:34.587: INFO: Created: latency-svc-cvjpx
Aug 27 18:49:34.715: INFO: Got endpoints: latency-svc-cvjpx [2.482552134s]
Aug 27 18:49:34.796: INFO: Created: latency-svc-5z66f
Aug 27 18:49:34.903: INFO: Got endpoints: latency-svc-5z66f [2.589213621s]
Aug 27 18:49:34.964: INFO: Created: latency-svc-xv4mn
Aug 27 18:49:35.056: INFO: Got endpoints: latency-svc-xv4mn [2.69850813s]
Aug 27 18:49:35.087: INFO: Created: latency-svc-rqvl6
Aug 27 18:49:35.119: INFO: Got endpoints: latency-svc-rqvl6 [2.718470407s]
Aug 27 18:49:35.218: INFO: Created: latency-svc-xwfgf
Aug 27 18:49:35.222: INFO: Got endpoints: latency-svc-xwfgf [2.73106215s]
Aug 27 18:49:35.271: INFO: Created: latency-svc-sqhsr
Aug 27 18:49:35.288: INFO: Got endpoints: latency-svc-sqhsr [2.72244384s]
Aug 27 18:49:35.310: INFO: Created: latency-svc-pvckt
Aug 27 18:49:35.379: INFO: Got endpoints: latency-svc-pvckt [2.686671989s]
Aug 27 18:49:35.381: INFO: Created: latency-svc-jqzbt
Aug 27 18:49:35.421: INFO: Got endpoints: latency-svc-jqzbt [2.639572217s]
Aug 27 18:49:35.610: INFO: Created: latency-svc-6rmx8
Aug 27 18:49:35.793: INFO: Got endpoints: latency-svc-6rmx8 [2.982425659s]
Aug 27 18:49:35.837: INFO: Created: latency-svc-65b6m
Aug 27 18:49:36.058: INFO: Got endpoints: latency-svc-65b6m [3.199284274s]
Aug 27 18:49:36.075: INFO: Created: latency-svc-6glrx
Aug 27 18:49:36.111: INFO: Got endpoints: latency-svc-6glrx [3.124509123s]
Aug 27 18:49:36.323: INFO: Created: latency-svc-p54vt
Aug 27 18:49:36.446: INFO: Got endpoints: latency-svc-p54vt [3.035256303s]
Aug 27 18:49:36.472: INFO: Created: latency-svc-522g4
Aug 27 18:49:36.616: INFO: Got endpoints: latency-svc-522g4 [3.104547337s]
Aug 27 18:49:36.637: INFO: Created: latency-svc-q7n8w
Aug 27 18:49:36.669: INFO: Got endpoints: latency-svc-q7n8w [2.720307111s]
Aug 27 18:49:36.859: INFO: Created: latency-svc-xnz6v
Aug 27 18:49:36.945: INFO: Got endpoints: latency-svc-xnz6v [2.702767828s]
Aug 27 18:49:37.038: INFO: Created: latency-svc-7sjws
Aug 27 18:49:37.086: INFO: Got endpoints: latency-svc-7sjws [2.37089163s]
Aug 27 18:49:37.145: INFO: Created: latency-svc-dxck7
Aug 27 18:49:37.194: INFO: Got endpoints: latency-svc-dxck7 [2.290773679s]
Aug 27 18:49:37.216: INFO: Created: latency-svc-jzg8b
Aug 27 18:49:37.254: INFO: Got endpoints: latency-svc-jzg8b [2.197453649s]
Aug 27 18:49:37.380: INFO: Created: latency-svc-vcmh8
Aug 27 18:49:37.382: INFO: Got endpoints: latency-svc-vcmh8 [2.262664453s]
Aug 27 18:49:37.458: INFO: Created: latency-svc-q7tvw
Aug 27 18:49:37.571: INFO: Got endpoints: latency-svc-q7tvw [2.349571305s]
Aug 27 18:49:37.605: INFO: Created: latency-svc-vtdrx
Aug 27 18:49:37.646: INFO: Got endpoints: latency-svc-vtdrx [2.358300608s]
Aug 27 18:49:37.860: INFO: Created: latency-svc-9fsm8
Aug 27 18:49:38.020: INFO: Got endpoints: latency-svc-9fsm8 [2.64095332s]
Aug 27 18:49:38.219: INFO: Created: latency-svc-pc4jb
Aug 27 18:49:38.262: INFO: Got endpoints: latency-svc-pc4jb [2.841293845s]
Aug 27 18:49:38.448: INFO: Created: latency-svc-qwsrc
Aug 27 18:49:38.520: INFO: Got endpoints: latency-svc-qwsrc [2.727053144s]
Aug 27 18:49:38.674: INFO: Created: latency-svc-l56wd
Aug 27 18:49:38.699: INFO: Got endpoints: latency-svc-l56wd [2.640997946s]
Aug 27 18:49:38.914: INFO: Created: latency-svc-ltl8f
Aug 27 18:49:38.945: INFO: Got endpoints: latency-svc-ltl8f [2.834158547s]
Aug 27 18:49:40.101: INFO: Created: latency-svc-hs54w
Aug 27 18:49:40.676: INFO: Got endpoints: latency-svc-hs54w [4.230498807s]
Aug 27 18:49:41.229: INFO: Created: latency-svc-94nr7
Aug 27 18:49:41.458: INFO: Got endpoints: latency-svc-94nr7 [4.841749743s]
Aug 27 18:49:42.063: INFO: Created: latency-svc-sh5wb
Aug 27 18:49:42.072: INFO: Got endpoints: latency-svc-sh5wb [5.402852115s]
Aug 27 18:49:42.539: INFO: Created: latency-svc-nvk97
Aug 27 18:49:42.539: INFO: Got endpoints: latency-svc-nvk97 [5.594714613s]
Aug 27 18:49:42.935: INFO: Created: latency-svc-77ctz
Aug 27 18:49:43.254: INFO: Created: latency-svc-x9tst
Aug 27 18:49:43.337: INFO: Got endpoints: latency-svc-77ctz [6.251198541s]
Aug 27 18:49:43.337: INFO: Got endpoints: latency-svc-x9tst [6.142894084s]
Aug 27 18:49:43.483: INFO: Created: latency-svc-wlkmr
Aug 27 18:49:43.733: INFO: Got endpoints: latency-svc-wlkmr [6.479647195s]
Aug 27 18:49:43.782: INFO: Created: latency-svc-9259x
Aug 27 18:49:44.057: INFO: Got endpoints: latency-svc-9259x [6.674687593s]
Aug 27 18:49:44.060: INFO: Created: latency-svc-p2kpc
Aug 27 18:49:44.269: INFO: Got endpoints: latency-svc-p2kpc [6.697202864s]
Aug 27 18:49:44.321: INFO: Created: latency-svc-zdjlk
Aug 27 18:49:44.343: INFO: Got endpoints: latency-svc-zdjlk [6.697277317s]
Aug 27 18:49:44.410: INFO: Created: latency-svc-kdvv4
Aug 27 18:49:44.430: INFO: Got endpoints: latency-svc-kdvv4 [6.409478808s]
Aug 27 18:49:44.466: INFO: Created: latency-svc-qxhfd
Aug 27 18:49:44.482: INFO: Got endpoints: latency-svc-qxhfd [6.220206595s]
Aug 27 18:49:44.578: INFO: Created: latency-svc-z58ks
Aug 27 18:49:44.590: INFO: Got endpoints: latency-svc-z58ks [6.070275806s]
Aug 27 18:49:44.631: INFO: Created: latency-svc-frc7m
Aug 27 18:49:44.670: INFO: Got endpoints: latency-svc-frc7m [5.970476845s]
Aug 27 18:49:44.748: INFO: Created: latency-svc-4tv85
Aug 27 18:49:44.765: INFO: Got endpoints: latency-svc-4tv85 [5.820135813s]
Aug 27 18:49:44.793: INFO: Created: latency-svc-cb8bx
Aug 27 18:49:44.820: INFO: Got endpoints: latency-svc-cb8bx [4.143601608s]
Aug 27 18:49:44.895: INFO: Created: latency-svc-mdln7
Aug 27 18:49:44.898: INFO: Got endpoints: latency-svc-mdln7 [3.439907029s]
Aug 27 18:49:44.976: INFO: Created: latency-svc-hdgqt
Aug 27 18:49:44.994: INFO: Got endpoints: latency-svc-hdgqt [2.922136119s]
Aug 27 18:49:45.081: INFO: Created: latency-svc-lkb9h
Aug 27 18:49:45.125: INFO: Got endpoints: latency-svc-lkb9h [2.586014831s]
Aug 27 18:49:45.168: INFO: Created: latency-svc-m4lbq
Aug 27 18:49:45.230: INFO: Got endpoints: latency-svc-m4lbq [1.893046578s]
Aug 27 18:49:45.246: INFO: Created: latency-svc-kxc4j
Aug 27 18:49:45.265: INFO: Got endpoints: latency-svc-kxc4j [1.928302357s]
Aug 27 18:49:45.285: INFO: Created: latency-svc-hb28p
Aug 27 18:49:45.304: INFO: Got endpoints: latency-svc-hb28p [1.571139383s]
Aug 27 18:49:45.386: INFO: Created: latency-svc-q9kbr
Aug 27 18:49:45.410: INFO: Got endpoints: latency-svc-q9kbr [1.353031156s]
Aug 27 18:49:45.438: INFO: Created: latency-svc-5wz6h
Aug 27 18:49:45.452: INFO: Got endpoints: latency-svc-5wz6h [1.183544813s]
Aug 27 18:49:45.486: INFO: Created: latency-svc-9wjn2
Aug 27 18:49:45.571: INFO: Got endpoints: latency-svc-9wjn2 [1.227671068s]
Aug 27 18:49:45.597: INFO: Created: latency-svc-kcx4w
Aug 27 18:49:45.610: INFO: Got endpoints: latency-svc-kcx4w [1.179855285s]
Aug 27 18:49:45.654: INFO: Created: latency-svc-jh9m7
Aug 27 18:49:45.663: INFO: Got endpoints: latency-svc-jh9m7 [1.181280246s]
Aug 27 18:49:45.715: INFO: Created: latency-svc-596dv
Aug 27 18:49:45.724: INFO: Got endpoints: latency-svc-596dv [1.133130214s]
Aug 27 18:49:46.071: INFO: Created: latency-svc-g8ck8
Aug 27 18:49:46.260: INFO: Got endpoints: latency-svc-g8ck8 [1.590695101s]
Aug 27 18:49:46.422: INFO: Created: latency-svc-p8b2z
Aug 27 18:49:46.882: INFO: Got endpoints: latency-svc-p8b2z [2.116453559s]
Aug 27 18:49:46.883: INFO: Created: latency-svc-rsstg
Aug 27 18:49:47.213: INFO: Got endpoints: latency-svc-rsstg [2.392472746s]
Aug 27 18:49:47.224: INFO: Created: latency-svc-ljwdn
Aug 27 18:49:47.433: INFO: Got endpoints: latency-svc-ljwdn [2.535706125s]
Aug 27 18:49:47.680: INFO: Created: latency-svc-fxr86
Aug 27 18:49:47.775: INFO: Got endpoints: latency-svc-fxr86 [2.78092303s]
Aug 27 18:49:47.983: INFO: Created: latency-svc-dtr5j
Aug 27 18:49:47.998: INFO: Got endpoints: latency-svc-dtr5j [2.872830657s]
Aug 27 18:49:48.071: INFO: Created: latency-svc-2c9mb
Aug 27 18:49:48.129: INFO: Got endpoints: latency-svc-2c9mb [2.898589752s]
Aug 27 18:49:48.327: INFO: Created: latency-svc-sjmmp
Aug 27 18:49:48.381: INFO: Got endpoints: latency-svc-sjmmp [3.115456506s]
Aug 27 18:49:48.568: INFO: Created: latency-svc-rhr4v
Aug 27 18:49:48.727: INFO: Got endpoints: latency-svc-rhr4v [3.422353977s]
Aug 27 18:49:48.765: INFO: Created: latency-svc-jgmbv
Aug 27 18:49:48.777: INFO: Got endpoints: latency-svc-jgmbv [3.366726967s]
Aug 27 18:49:48.803: INFO: Created: latency-svc-7547z
Aug 27 18:49:48.813: INFO: Got endpoints: latency-svc-7547z [3.361014766s]
Aug 27 18:49:48.877: INFO: Created: latency-svc-95qxl
Aug 27 18:49:48.885: INFO: Got endpoints: latency-svc-95qxl [3.314290284s]
Aug 27 18:49:48.936: INFO: Created: latency-svc-5lng9
Aug 27 18:49:48.946: INFO: Got endpoints: latency-svc-5lng9 [3.335714285s]
Aug 27 18:49:49.030: INFO: Created: latency-svc-x46pb
Aug 27 18:49:49.080: INFO: Created: latency-svc-5xvkk
Aug 27 18:49:49.080: INFO: Got endpoints: latency-svc-x46pb [3.416215855s]
Aug 27 18:49:49.090: INFO: Got endpoints: latency-svc-5xvkk [3.36681213s]
Aug 27 18:49:49.116: INFO: Created: latency-svc-lwzmc
Aug 27 18:49:49.176: INFO: Got endpoints: latency-svc-lwzmc [2.915514637s]
Aug 27 18:49:49.191: INFO: Created: latency-svc-fwgjg
Aug 27 18:49:49.230: INFO: Got endpoints: latency-svc-fwgjg [2.348017205s]
Aug 27 18:49:49.258: INFO: Created: latency-svc-pwkwk
Aug 27 18:49:49.272: INFO: Got endpoints: latency-svc-pwkwk [2.059038219s]
Aug 27 18:49:49.368: INFO: Created: latency-svc-gjms5
Aug 27 18:49:49.387: INFO: Got endpoints: latency-svc-gjms5 [1.953074789s]
Aug 27 18:49:49.482: INFO: Created: latency-svc-rwkkz
Aug 27 18:49:49.494: INFO: Got endpoints: latency-svc-rwkkz [1.718970384s]
Aug 27 18:49:49.536: INFO: Created: latency-svc-sj69x
Aug 27 18:49:49.555: INFO: Got endpoints: latency-svc-sj69x [1.556099835s]
Aug 27 18:49:49.578: INFO: Created: latency-svc-2br5w
Aug 27 18:49:49.637: INFO: Got endpoints: latency-svc-2br5w [142.757507ms]
Aug 27 18:49:49.647: INFO: Created: latency-svc-qhrd2
Aug 27 18:49:49.670: INFO: Got endpoints: latency-svc-qhrd2 [1.541612367s]
Aug 27 18:49:49.695: INFO: Created: latency-svc-lmrmb
Aug 27 18:49:49.706: INFO: Got endpoints: latency-svc-lmrmb [1.324958832s]
Aug 27 18:49:49.734: INFO: Created: latency-svc-jz92j
Aug 27 18:49:49.793: INFO: Got endpoints: latency-svc-jz92j [1.066051893s]
Aug 27 18:49:49.803: INFO: Created: latency-svc-9f959
Aug 27 18:49:49.821: INFO: Got endpoints: latency-svc-9f959 [1.043873363s]
Aug 27 18:49:49.888: INFO: Created: latency-svc-nl6zf
Aug 27 18:49:49.942: INFO: Got endpoints: latency-svc-nl6zf [1.129222176s]
Aug 27 18:49:49.974: INFO: Created: latency-svc-qzfc5
Aug 27 18:49:49.990: INFO: Got endpoints: latency-svc-qzfc5 [1.104564415s]
Aug 27 18:49:50.115: INFO: Created: latency-svc-pq7mj
Aug 27 18:49:50.119: INFO: Got endpoints: latency-svc-pq7mj [1.173372837s]
Aug 27 18:49:50.163: INFO: Created: latency-svc-94xph
Aug 27 18:49:50.266: INFO: Got endpoints: latency-svc-94xph [1.186304856s]
Aug 27 18:49:50.279: INFO: Created: latency-svc-psx2d
Aug 27 18:49:50.294: INFO: Got endpoints: latency-svc-psx2d [1.203110968s]
Aug 27 18:49:50.347: INFO: Created: latency-svc-l86jn
Aug 27 18:49:50.360: INFO: Got endpoints: latency-svc-l86jn [1.184112855s]
Aug 27 18:49:50.424: INFO: Created: latency-svc-nlj96
Aug 27 18:49:50.426: INFO: Got endpoints: latency-svc-nlj96 [1.196470613s]
Aug 27 18:49:50.457: INFO: Created: latency-svc-tz5jl
Aug 27 18:49:50.475: INFO: Got endpoints: latency-svc-tz5jl [1.202690287s]
Aug 27 18:49:50.499: INFO: Created: latency-svc-btldh
Aug 27 18:49:50.517: INFO: Got endpoints: latency-svc-btldh [1.130405304s]
Aug 27 18:49:50.595: INFO: Created: latency-svc-bb252
Aug 27 18:49:50.601: INFO: Got endpoints: latency-svc-bb252 [1.046718864s]
Aug 27 18:49:50.667: INFO: Created: latency-svc-xmlp2
Aug 27 18:49:50.751: INFO: Got endpoints: latency-svc-xmlp2 [1.114028796s]
Aug 27 18:49:50.765: INFO: Created: latency-svc-f7vkj
Aug 27 18:49:50.782: INFO: Got endpoints: latency-svc-f7vkj [1.112072865s]
Aug 27 18:49:50.809: INFO: Created: latency-svc-qkpqv
Aug 27 18:49:50.819: INFO: Got endpoints: latency-svc-qkpqv [1.112798241s]
Aug 27 18:49:50.895: INFO: Created: latency-svc-cbxpm
Aug 27 18:49:50.909: INFO: Got endpoints: latency-svc-cbxpm [1.116028372s]
Aug 27 18:49:50.932: INFO: Created: latency-svc-xxlsx
Aug 27 18:49:50.946: INFO: Got endpoints: latency-svc-xxlsx [1.124980865s]
Aug 27 18:49:51.038: INFO: Created: latency-svc-4dm7r
Aug 27 18:49:51.041: INFO: Got endpoints: latency-svc-4dm7r [1.09884065s]
Aug 27 18:49:51.132: INFO: Created: latency-svc-tqdmb
Aug 27 18:49:51.188: INFO: Got endpoints: latency-svc-tqdmb [1.197780872s]
Aug 27 18:49:51.231: INFO: Created: latency-svc-rqdxg
Aug 27 18:49:51.258: INFO: Got endpoints: latency-svc-rqdxg [1.139381412s]
Aug 27 18:49:51.326: INFO: Created: latency-svc-5gkhk
Aug 27 18:49:51.363: INFO: Got endpoints: latency-svc-5gkhk [1.096587606s]
Aug 27 18:49:51.364: INFO: Created: latency-svc-gxlrx
Aug 27 18:49:51.373: INFO: Got endpoints: latency-svc-gxlrx [1.079533524s]
Aug 27 18:49:51.399: INFO: Created: latency-svc-658sc
Aug 27 18:49:51.410: INFO: Got endpoints: latency-svc-658sc [1.049726034s]
Aug 27 18:49:51.554: INFO: Created: latency-svc-n9bc2
Aug 27 18:49:51.579: INFO: Got endpoints: latency-svc-n9bc2 [1.15294117s]
Aug 27 18:49:51.618: INFO: Created: latency-svc-rzhsd
Aug 27 18:49:51.626: INFO: Got endpoints: latency-svc-rzhsd [1.151609553s]
Aug 27 18:49:51.705: INFO: Created: latency-svc-29ljj
Aug 27 18:49:51.707: INFO: Got endpoints: latency-svc-29ljj [1.189950868s]
Aug 27 18:49:51.747: INFO: Created: latency-svc-bn98l
Aug 27 18:49:51.766: INFO: Got endpoints: latency-svc-bn98l [1.164192475s]
Aug 27 18:49:51.790: INFO: Created: latency-svc-krm86
Aug 27 18:49:51.843: INFO: Got endpoints: latency-svc-krm86 [1.091987111s]
Aug 27 18:49:51.881: INFO: Created: latency-svc-4vmql
Aug 27 18:49:51.911: INFO: Got endpoints: latency-svc-4vmql [1.129065655s]
Aug 27 18:49:51.991: INFO: Created: latency-svc-4sgld
Aug 27 18:49:51.994: INFO: Got endpoints: latency-svc-4sgld [1.175267376s]
Aug 27 18:49:52.038: INFO: Created: latency-svc-84zzw
Aug 27 18:49:52.055: INFO: Got endpoints: latency-svc-84zzw [1.14554293s]
Aug 27 18:49:52.077: INFO: Created: latency-svc-kz8qg
Aug 27 18:49:52.135: INFO: Got endpoints: latency-svc-kz8qg [1.189122415s]
Aug 27 18:49:52.151: INFO: Created: latency-svc-k228g
Aug 27 18:49:52.175: INFO: Got endpoints: latency-svc-k228g [1.134097949s]
Aug 27 18:49:52.209: INFO: Created: latency-svc-fpr4n
Aug 27 18:49:52.224: INFO: Got endpoints: latency-svc-fpr4n [1.035722225s]
Aug 27 18:49:52.272: INFO: Created: latency-svc-bsgrf
Aug 27 18:49:52.274: INFO: Got endpoints: latency-svc-bsgrf [1.015765903s]
Aug 27 18:49:52.350: INFO: Created: latency-svc-9qq2r
Aug 27 18:49:52.427: INFO: Got endpoints: latency-svc-9qq2r [1.064602983s]
Aug 27 18:49:52.443: INFO: Created: latency-svc-2mn28
Aug 27 18:49:52.465: INFO: Got endpoints: latency-svc-2mn28 [1.092207449s]
Aug 27 18:49:52.590: INFO: Created: latency-svc-vr8f9
Aug 27 18:49:52.604: INFO: Got endpoints: latency-svc-vr8f9 [1.194226148s]
Aug 27 18:49:52.640: INFO: Created: latency-svc-vqfms
Aug 27 18:49:52.645: INFO: Got endpoints: latency-svc-vqfms [1.066171332s]
Aug 27 18:49:52.667: INFO: Created: latency-svc-7ldst
Aug 27 18:49:52.682: INFO: Got endpoints: latency-svc-7ldst [1.055882342s]
Aug 27 18:49:52.751: INFO: Created: latency-svc-lvwjc
Aug 27 18:49:52.754: INFO: Got endpoints: latency-svc-lvwjc [1.046492689s]
Aug 27 18:49:52.791: INFO: Created: latency-svc-8m24s
Aug 27 18:49:52.815: INFO: Got endpoints: latency-svc-8m24s [1.049250428s]
Aug 27 18:49:52.839: INFO: Created: latency-svc-g6vnv
Aug 27 18:49:52.895: INFO: Got endpoints: latency-svc-g6vnv [1.051549879s]
Aug 27 18:49:52.926: INFO: Created: latency-svc-q7lcp
Aug 27 18:49:52.961: INFO: Got endpoints: latency-svc-q7lcp [1.049928484s]
Aug 27 18:49:53.045: INFO: Created: latency-svc-rwkwr
Aug 27 18:49:53.049: INFO: Got endpoints: latency-svc-rwkwr [1.054760007s]
Aug 27 18:49:53.097: INFO: Created: latency-svc-k7sft
Aug 27 18:49:53.117: INFO: Got endpoints: latency-svc-k7sft [1.06205084s]
Aug 27 18:49:53.139: INFO: Created: latency-svc-kbllj
Aug 27 18:49:53.206: INFO: Got endpoints: latency-svc-kbllj [1.071210734s]
Aug 27 18:49:53.216: INFO: Created: latency-svc-8gmwl
Aug 27 18:49:53.231: INFO: Got endpoints: latency-svc-8gmwl [1.055681048s]
Aug 27 18:49:53.274: INFO: Created: latency-svc-kdh6l
Aug 27 18:49:53.292: INFO: Got endpoints: latency-svc-kdh6l [1.068312074s]
Aug 27 18:49:53.381: INFO: Created: latency-svc-cjxqf
Aug 27 18:49:53.423: INFO: Created: latency-svc-hwfv6
Aug 27 18:49:53.457: INFO: Got endpoints: latency-svc-cjxqf [1.183185759s]
Aug 27 18:49:53.539: INFO: Got endpoints: latency-svc-hwfv6 [1.111161849s]
Aug 27 18:49:53.571: INFO: Created: latency-svc-thk7c
Aug 27 18:49:53.603: INFO: Got endpoints: latency-svc-thk7c [1.137427679s]
Aug 27 18:49:53.703: INFO: Created: latency-svc-5xbqh
Aug 27 18:49:53.739: INFO: Got endpoints: latency-svc-5xbqh [1.134692899s]
Aug 27 18:49:53.740: INFO: Created: latency-svc-w692p
Aug 27 18:49:53.756: INFO: Got endpoints: latency-svc-w692p [1.110150733s]
Aug 27 18:49:53.865: INFO: Created: latency-svc-b5v4k
Aug 27 18:49:53.867: INFO: Got endpoints: latency-svc-b5v4k [1.184950424s]
Aug 27 18:49:53.909: INFO: Created: latency-svc-dkfm8
Aug 27 18:49:53.942: INFO: Got endpoints: latency-svc-dkfm8 [1.188678739s]
Aug 27 18:49:54.040: INFO: Created: latency-svc-qjlk6
Aug 27 18:49:54.063: INFO: Got endpoints: latency-svc-qjlk6 [1.247726136s]
Aug 27 18:49:54.105: INFO: Created: latency-svc-6glqp
Aug 27 18:49:54.159: INFO: Got endpoints: latency-svc-6glqp [1.264246878s]
Aug 27 18:49:54.190: INFO: Created: latency-svc-x8jmk
Aug 27 18:49:54.208: INFO: Got endpoints: latency-svc-x8jmk [1.246205894s]
Aug 27 18:49:54.255: INFO: Created: latency-svc-2jdwc
Aug 27 18:49:54.315: INFO: Got endpoints: latency-svc-2jdwc [1.266514125s]
Aug 27 18:49:54.366: INFO: Created: latency-svc-6bsb6
Aug 27 18:49:54.382: INFO: Got endpoints: latency-svc-6bsb6 [1.265420856s]
Aug 27 18:49:54.452: INFO: Created: latency-svc-bbrv9
Aug 27 18:49:54.457: INFO: Got endpoints: latency-svc-bbrv9 [1.251236525s]
Aug 27 18:49:54.492: INFO: Created: latency-svc-kdnfw
Aug 27 18:49:54.503: INFO: Got endpoints: latency-svc-kdnfw [1.271477952s]
Aug 27 18:49:54.503: INFO: Latencies: [107.786743ms 142.757507ms 326.92761ms 401.600868ms 485.284057ms 561.128312ms 615.362402ms 693.133901ms 766.218425ms 849.623299ms 921.00731ms 1.00123514s 1.015765903s 1.024502233s 1.035722225s 1.040069907s 1.043873363s 1.046492689s 1.046718864s 1.049250428s 1.049726034s 1.049928484s 1.051549879s 1.054760007s 1.055681048s 1.055882342s 1.06205084s 1.064602983s 1.066051893s 1.066171332s 1.068312074s 1.071210734s 1.079533524s 1.091987111s 1.092207449s 1.096587606s 1.09884065s 1.104564415s 1.108624631s 1.110150733s 1.111161849s 1.112072865s 1.112798241s 1.113573725s 1.114028796s 1.116028372s 1.124980865s 1.129065655s 1.129222176s 1.130405304s 1.133130214s 1.134097949s 1.134692899s 1.137427679s 1.139381412s 1.139567503s 1.143717649s 1.14554293s 1.151609553s 1.15294117s 1.164192475s 1.173372837s 1.175267376s 1.179855285s 1.181280246s 1.183185759s 1.183544813s 1.184112855s 1.184950424s 1.186304856s 1.188678739s 1.189122415s 1.189237187s 1.189950868s 1.194226148s 1.194998276s 1.196470613s 1.197780872s 1.202690287s 1.203110968s 1.215077468s 1.216773819s 1.227671068s 1.230715058s 1.246205894s 1.247726136s 1.251236525s 1.263969701s 1.264246878s 1.264402498s 1.265420856s 1.266514125s 1.271477952s 1.288565302s 1.289251792s 1.324958832s 1.353031156s 1.357130998s 1.47931324s 1.490592155s 1.524742025s 1.541612367s 1.556099835s 1.571139383s 1.590695101s 1.646577128s 1.718970384s 1.721914625s 1.83562322s 1.843381526s 1.850603101s 1.893046578s 1.928302357s 1.953074789s 2.051912348s 2.059038219s 2.079826049s 2.093102263s 2.116453559s 2.197453649s 2.262664453s 2.290773679s 2.348017205s 2.349571305s 2.358300608s 2.37089163s 2.392472746s 2.460608154s 2.482552134s 2.526687416s 2.535706125s 2.586014831s 2.589213621s 2.639572217s 2.64095332s 2.640997946s 2.686671989s 2.69850813s 2.702767828s 2.718470407s 2.720307111s 2.72244384s 2.727053144s 2.73106215s 2.747702738s 2.78092303s 2.834158547s 2.841293845s 2.872830657s 2.898589752s 2.915514637s 2.922136119s 2.982425659s 3.035256303s 3.104547337s 3.115456506s 3.124509123s 3.199284274s 3.314290284s 3.335714285s 3.338156209s 3.361014766s 3.366726967s 3.36681213s 3.382164727s 3.416215855s 3.422353977s 3.439907029s 3.581819365s 4.143601608s 4.230498807s 4.735778598s 4.841749743s 4.905212652s 4.91969349s 5.402852115s 5.427705427s 5.594714613s 5.778741411s 5.820135813s 5.851810418s 5.970476845s 6.070275806s 6.096082471s 6.142894084s 6.220206595s 6.251198541s 6.277215748s 6.285679136s 6.409478808s 6.475756901s 6.479647195s 6.512727526s 6.665527563s 6.674687593s 6.697202864s 6.697277317s 6.863678655s 7.04090526s 7.102831118s]
Aug 27 18:49:54.503: INFO: 50 %ile: 1.524742025s
Aug 27 18:49:54.503: INFO: 90 %ile: 5.851810418s
Aug 27 18:49:54.503: INFO: 99 %ile: 7.04090526s
Aug 27 18:49:54.503: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:49:54.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-lzztj" for this suite.
Aug 27 18:50:22.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:50:22.588: INFO: namespace: e2e-tests-svc-latency-lzztj, resource: bindings, ignored listing per whitelist
Aug 27 18:50:22.592: INFO: namespace e2e-tests-svc-latency-lzztj deletion completed in 28.082328315s

• [SLOW TEST:66.275 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
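
The suite records one duration per service, from "Created" to "Got endpoints", and then reports percentiles over the 200 samples listed above. A small sketch of how such 50/90/99 %ile figures can be computed from a sample slice (the exact rounding the e2e framework uses may differ):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks a nearest-rank style value from an already sorted sample.
func percentile(sorted []time.Duration, p float64) time.Duration {
	idx := int(float64(len(sorted)-1)*p + 0.5)
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few of the endpoint-propagation latencies from the run above.
	samples := []time.Duration{
		48 * time.Millisecond, 108 * time.Millisecond, 1525 * time.Millisecond,
		5852 * time.Millisecond, 7041 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%d %%ile: %v\n", int(p*100+0.5), percentile(samples, p))
	}
}
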
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:50:22.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-w5dzv
Aug 27 18:50:26.800: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-w5dzv
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 18:50:26.803: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:54:28.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-w5dzv" for this suite.
Aug 27 18:54:34.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:54:34.430: INFO: namespace: e2e-tests-container-probe-w5dzv, resource: bindings, ignored listing per whitelist
Aug 27 18:54:34.439: INFO: namespace e2e-tests-container-probe-w5dzv deletion completed in 6.200800746s

• [SLOW TEST:251.847 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
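
The liveness-exec pod created here follows the usual pattern: the container writes /tmp/health and then sleeps, while an exec probe keeps running cat /tmp/health, so the restart count must stay at 0 for the whole observation window. The image, command and probe timings below are assumptions; note that in the API version used by this suite the probe action sits in the embedded Handler field (renamed ProbeHandler in later releases):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "docker.io/library/busybox:1.29",
				// The container creates the file the probe looks for, so the probe keeps passing.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Embedded Handler field in this API version (later ProbeHandler).
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].LivenessProbe.Exec.Command) // [cat /tmp/health]
}
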
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:54:34.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 18:54:34.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-rq8hc" to be "success or failure"
Aug 27 18:54:34.664: INFO: Pod "downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 69.604901ms
Aug 27 18:54:36.668: INFO: Pod "downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0730592s
Aug 27 18:54:38.672: INFO: Pod "downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07703657s
STEP: Saw pod success
Aug 27 18:54:38.672: INFO: Pod "downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:54:38.675: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 18:54:38.715: INFO: Waiting for pod downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b to disappear
Aug 27 18:54:38.742: INFO: Pod downwardapi-volume-bf9e3a95-e896-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:54:38.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rq8hc" for this suite.
Aug 27 18:54:44.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:54:44.830: INFO: namespace: e2e-tests-downward-api-rq8hc, resource: bindings, ignored listing per whitelist
Aug 27 18:54:44.836: INFO: namespace e2e-tests-downward-api-rq8hc deletion completed in 6.090576393s

• [SLOW TEST:10.396 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
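
"Set mode on item file" exercises per-item file modes in a downward API volume. A sketch of such a volume, assuming a 0400 mode and a podname item (the client-container name matches the log above; everything else is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // requested file mode for the single item
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
							Mode:     &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(*pod.Spec.Volumes[0].DownwardAPI.Items[0].Mode) // 256, i.e. 0400
}
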
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:54:44.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 27 18:54:44.994: INFO: Waiting up to 5m0s for pod "pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-g96f4" to be "success or failure"
Aug 27 18:54:45.054: INFO: Pod "pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 59.377695ms
Aug 27 18:54:47.058: INFO: Pod "pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06353041s
Aug 27 18:54:49.062: INFO: Pod "pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06795303s
STEP: Saw pod success
Aug 27 18:54:49.062: INFO: Pod "pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:54:49.066: INFO: Trying to get logs from node hunter-worker2 pod pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:54:49.083: INFO: Waiting for pod pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b to disappear
Aug 27 18:54:49.088: INFO: Pod pod-c5d0f5a1-e896-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:54:49.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g96f4" for this suite.
Aug 27 18:54:55.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:54:55.118: INFO: namespace: e2e-tests-emptydir-g96f4, resource: bindings, ignored listing per whitelist
Aug 27 18:54:55.204: INFO: namespace e2e-tests-emptydir-g96f4 deletion completed in 6.112743384s

• [SLOW TEST:10.368 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
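
The (non-root,0777,default) case mounts an emptyDir on the default medium into a container that runs as a non-root UID and verifies the 0777 permissions. A sketch under those assumptions (UID, image and shell command are illustrative), which also matches the "default medium" variant in the next block:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "touch /ed/f && chmod 0777 /ed/f && stat -c %a /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].EmptyDir.Medium == corev1.StorageMediumDefault) // true
}
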
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:54:55.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 27 18:54:55.324: INFO: Waiting up to 5m0s for pod "pod-cbfb0273-e896-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-wqm2h" to be "success or failure"
Aug 27 18:54:55.328: INFO: Pod "pod-cbfb0273-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.958357ms
Aug 27 18:54:57.332: INFO: Pod "pod-cbfb0273-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008537468s
Aug 27 18:54:59.336: INFO: Pod "pod-cbfb0273-e896-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012282508s
STEP: Saw pod success
Aug 27 18:54:59.336: INFO: Pod "pod-cbfb0273-e896-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:54:59.339: INFO: Trying to get logs from node hunter-worker2 pod pod-cbfb0273-e896-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:54:59.361: INFO: Waiting for pod pod-cbfb0273-e896-11ea-b58c-0242ac11000b to disappear
Aug 27 18:54:59.431: INFO: Pod pod-cbfb0273-e896-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:54:59.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wqm2h" for this suite.
Aug 27 18:55:05.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:55:05.505: INFO: namespace: e2e-tests-emptydir-wqm2h, resource: bindings, ignored listing per whitelist
Aug 27 18:55:05.544: INFO: namespace e2e-tests-emptydir-wqm2h deletion completed in 6.108570005s

• [SLOW TEST:10.340 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:55:05.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d23e9a50-e896-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 18:55:05.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-ks8x5" to be "success or failure"
Aug 27 18:55:05.907: INFO: Pod "pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 59.887986ms
Aug 27 18:55:08.349: INFO: Pod "pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501338114s
Aug 27 18:55:10.352: INFO: Pod "pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.504610554s
Aug 27 18:55:12.357: INFO: Pod "pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.509317215s
STEP: Saw pod success
Aug 27 18:55:12.357: INFO: Pod "pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:55:12.359: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Aug 27 18:55:12.528: INFO: Waiting for pod pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b to disappear
Aug 27 18:55:12.616: INFO: Pod pod-configmaps-d23eeaca-e896-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:55:12.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ks8x5" for this suite.
Aug 27 18:55:18.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:55:18.708: INFO: namespace: e2e-tests-configmap-ks8x5, resource: bindings, ignored listing per whitelist
Aug 27 18:55:18.780: INFO: namespace e2e-tests-configmap-ks8x5 deletion completed in 6.158557112s

• [SLOW TEST:13.236 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
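
Here a ConfigMap is mounted as a volume and read back by a container running as a non-root user. A sketch of that shape (the key/value pair and UID are assumptions; the configmap-volume-test container name matches the log above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid}, // read the volume as a non-root user
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].ConfigMap.Name) // configmap-test-volume
}
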
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:55:18.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Aug 27 18:55:18.972: INFO: Waiting up to 5m0s for pod "var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b" in namespace "e2e-tests-var-expansion-2c99x" to be "success or failure"
Aug 27 18:55:18.994: INFO: Pod "var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.900148ms
Aug 27 18:55:21.198: INFO: Pod "var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22608814s
Aug 27 18:55:23.203: INFO: Pod "var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.230629576s
STEP: Saw pod success
Aug 27 18:55:23.203: INFO: Pod "var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:55:23.206: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b container dapi-container: 
STEP: delete the pod
Aug 27 18:55:23.235: INFO: Waiting for pod var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b to disappear
Aug 27 18:55:23.269: INFO: Pod var-expansion-da1222ae-e896-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:55:23.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2c99x" for this suite.
Aug 27 18:55:29.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:55:29.420: INFO: namespace: e2e-tests-var-expansion-2c99x, resource: bindings, ignored listing per whitelist
Aug 27 18:55:29.429: INFO: namespace e2e-tests-var-expansion-2c99x deletion completed in 6.155788619s

• [SLOW TEST:10.649 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
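
Variable expansion means that $(NAME) references in a container's command are resolved from the container's declared environment before the process starts, which is what this spec verifies. A sketch (names and values are illustrative; the dapi-container name matches the log above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "docker.io/library/busybox:1.29",
				Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
				// $(MESSAGE) is expanded by Kubernetes from the declared Env before the
				// container starts; the shell never sees the unexpanded reference.
				Command: []string{"/bin/sh", "-c", "echo $(MESSAGE)"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command[2]) // echo $(MESSAGE)
}
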
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:55:29.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 27 18:55:39.709: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:39.709: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:39.751931       6 log.go:172] (0xc002084160) (0xc001aaa3c0) Create stream
I0827 18:55:39.751964       6 log.go:172] (0xc002084160) (0xc001aaa3c0) Stream added, broadcasting: 1
I0827 18:55:39.754438       6 log.go:172] (0xc002084160) Reply frame received for 1
I0827 18:55:39.754494       6 log.go:172] (0xc002084160) (0xc000aaadc0) Create stream
I0827 18:55:39.754511       6 log.go:172] (0xc002084160) (0xc000aaadc0) Stream added, broadcasting: 3
I0827 18:55:39.755600       6 log.go:172] (0xc002084160) Reply frame received for 3
I0827 18:55:39.755638       6 log.go:172] (0xc002084160) (0xc002218be0) Create stream
I0827 18:55:39.755662       6 log.go:172] (0xc002084160) (0xc002218be0) Stream added, broadcasting: 5
I0827 18:55:39.756921       6 log.go:172] (0xc002084160) Reply frame received for 5
I0827 18:55:39.841512       6 log.go:172] (0xc002084160) Data frame received for 3
I0827 18:55:39.841562       6 log.go:172] (0xc000aaadc0) (3) Data frame handling
I0827 18:55:39.841586       6 log.go:172] (0xc000aaadc0) (3) Data frame sent
I0827 18:55:39.841611       6 log.go:172] (0xc002084160) Data frame received for 3
I0827 18:55:39.841620       6 log.go:172] (0xc000aaadc0) (3) Data frame handling
I0827 18:55:39.841657       6 log.go:172] (0xc002084160) Data frame received for 5
I0827 18:55:39.841691       6 log.go:172] (0xc002218be0) (5) Data frame handling
I0827 18:55:39.843342       6 log.go:172] (0xc002084160) Data frame received for 1
I0827 18:55:39.843383       6 log.go:172] (0xc001aaa3c0) (1) Data frame handling
I0827 18:55:39.843415       6 log.go:172] (0xc001aaa3c0) (1) Data frame sent
I0827 18:55:39.843437       6 log.go:172] (0xc002084160) (0xc001aaa3c0) Stream removed, broadcasting: 1
I0827 18:55:39.843455       6 log.go:172] (0xc002084160) Go away received
I0827 18:55:39.843618       6 log.go:172] (0xc002084160) (0xc001aaa3c0) Stream removed, broadcasting: 1
I0827 18:55:39.843651       6 log.go:172] (0xc002084160) (0xc000aaadc0) Stream removed, broadcasting: 3
I0827 18:55:39.843675       6 log.go:172] (0xc002084160) (0xc002218be0) Stream removed, broadcasting: 5
Aug 27 18:55:39.843: INFO: Exec stderr: ""
Aug 27 18:55:39.843: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:39.843: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:39.875463       6 log.go:172] (0xc000c2a2c0) (0xc000aab0e0) Create stream
I0827 18:55:39.875489       6 log.go:172] (0xc000c2a2c0) (0xc000aab0e0) Stream added, broadcasting: 1
I0827 18:55:39.879490       6 log.go:172] (0xc000c2a2c0) Reply frame received for 1
I0827 18:55:39.879556       6 log.go:172] (0xc000c2a2c0) (0xc002218c80) Create stream
I0827 18:55:39.879590       6 log.go:172] (0xc000c2a2c0) (0xc002218c80) Stream added, broadcasting: 3
I0827 18:55:39.881143       6 log.go:172] (0xc000c2a2c0) Reply frame received for 3
I0827 18:55:39.881181       6 log.go:172] (0xc000c2a2c0) (0xc002218d20) Create stream
I0827 18:55:39.881195       6 log.go:172] (0xc000c2a2c0) (0xc002218d20) Stream added, broadcasting: 5
I0827 18:55:39.882140       6 log.go:172] (0xc000c2a2c0) Reply frame received for 5
I0827 18:55:39.968309       6 log.go:172] (0xc000c2a2c0) Data frame received for 5
I0827 18:55:39.968359       6 log.go:172] (0xc002218d20) (5) Data frame handling
I0827 18:55:39.968389       6 log.go:172] (0xc000c2a2c0) Data frame received for 3
I0827 18:55:39.968402       6 log.go:172] (0xc002218c80) (3) Data frame handling
I0827 18:55:39.968415       6 log.go:172] (0xc002218c80) (3) Data frame sent
I0827 18:55:39.968428       6 log.go:172] (0xc000c2a2c0) Data frame received for 3
I0827 18:55:39.968439       6 log.go:172] (0xc002218c80) (3) Data frame handling
I0827 18:55:39.969790       6 log.go:172] (0xc000c2a2c0) Data frame received for 1
I0827 18:55:39.969808       6 log.go:172] (0xc000aab0e0) (1) Data frame handling
I0827 18:55:39.969818       6 log.go:172] (0xc000aab0e0) (1) Data frame sent
I0827 18:55:39.969828       6 log.go:172] (0xc000c2a2c0) (0xc000aab0e0) Stream removed, broadcasting: 1
I0827 18:55:39.969841       6 log.go:172] (0xc000c2a2c0) Go away received
I0827 18:55:39.969954       6 log.go:172] (0xc000c2a2c0) (0xc000aab0e0) Stream removed, broadcasting: 1
I0827 18:55:39.969972       6 log.go:172] (0xc000c2a2c0) (0xc002218c80) Stream removed, broadcasting: 3
I0827 18:55:39.969991       6 log.go:172] (0xc000c2a2c0) (0xc002218d20) Stream removed, broadcasting: 5
Aug 27 18:55:39.970: INFO: Exec stderr: ""
Aug 27 18:55:39.970: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:39.970: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:39.999228       6 log.go:172] (0xc001cd2580) (0xc00141ba40) Create stream
I0827 18:55:39.999269       6 log.go:172] (0xc001cd2580) (0xc00141ba40) Stream added, broadcasting: 1
I0827 18:55:40.001365       6 log.go:172] (0xc001cd2580) Reply frame received for 1
I0827 18:55:40.001412       6 log.go:172] (0xc001cd2580) (0xc002218dc0) Create stream
I0827 18:55:40.001430       6 log.go:172] (0xc001cd2580) (0xc002218dc0) Stream added, broadcasting: 3
I0827 18:55:40.002281       6 log.go:172] (0xc001cd2580) Reply frame received for 3
I0827 18:55:40.002314       6 log.go:172] (0xc001cd2580) (0xc00141bae0) Create stream
I0827 18:55:40.002333       6 log.go:172] (0xc001cd2580) (0xc00141bae0) Stream added, broadcasting: 5
I0827 18:55:40.003218       6 log.go:172] (0xc001cd2580) Reply frame received for 5
I0827 18:55:40.071910       6 log.go:172] (0xc001cd2580) Data frame received for 5
I0827 18:55:40.071944       6 log.go:172] (0xc00141bae0) (5) Data frame handling
I0827 18:55:40.071994       6 log.go:172] (0xc001cd2580) Data frame received for 3
I0827 18:55:40.072043       6 log.go:172] (0xc002218dc0) (3) Data frame handling
I0827 18:55:40.072068       6 log.go:172] (0xc002218dc0) (3) Data frame sent
I0827 18:55:40.072086       6 log.go:172] (0xc001cd2580) Data frame received for 3
I0827 18:55:40.072098       6 log.go:172] (0xc002218dc0) (3) Data frame handling
I0827 18:55:40.073472       6 log.go:172] (0xc001cd2580) Data frame received for 1
I0827 18:55:40.073498       6 log.go:172] (0xc00141ba40) (1) Data frame handling
I0827 18:55:40.073512       6 log.go:172] (0xc00141ba40) (1) Data frame sent
I0827 18:55:40.073522       6 log.go:172] (0xc001cd2580) (0xc00141ba40) Stream removed, broadcasting: 1
I0827 18:55:40.073537       6 log.go:172] (0xc001cd2580) Go away received
I0827 18:55:40.073689       6 log.go:172] (0xc001cd2580) (0xc00141ba40) Stream removed, broadcasting: 1
I0827 18:55:40.073712       6 log.go:172] (0xc001cd2580) (0xc002218dc0) Stream removed, broadcasting: 3
I0827 18:55:40.073725       6 log.go:172] (0xc001cd2580) (0xc00141bae0) Stream removed, broadcasting: 5
Aug 27 18:55:40.073: INFO: Exec stderr: ""
Aug 27 18:55:40.073: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:40.073: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:40.103950       6 log.go:172] (0xc001cd2a50) (0xc00141be00) Create stream
I0827 18:55:40.103982       6 log.go:172] (0xc001cd2a50) (0xc00141be00) Stream added, broadcasting: 1
I0827 18:55:40.110919       6 log.go:172] (0xc001cd2a50) Reply frame received for 1
I0827 18:55:40.110990       6 log.go:172] (0xc001cd2a50) (0xc00141bf40) Create stream
I0827 18:55:40.111006       6 log.go:172] (0xc001cd2a50) (0xc00141bf40) Stream added, broadcasting: 3
I0827 18:55:40.113071       6 log.go:172] (0xc001cd2a50) Reply frame received for 3
I0827 18:55:40.113126       6 log.go:172] (0xc001cd2a50) (0xc001910000) Create stream
I0827 18:55:40.113141       6 log.go:172] (0xc001cd2a50) (0xc001910000) Stream added, broadcasting: 5
I0827 18:55:40.116700       6 log.go:172] (0xc001cd2a50) Reply frame received for 5
I0827 18:55:40.194343       6 log.go:172] (0xc001cd2a50) Data frame received for 5
I0827 18:55:40.194369       6 log.go:172] (0xc001910000) (5) Data frame handling
I0827 18:55:40.194393       6 log.go:172] (0xc001cd2a50) Data frame received for 3
I0827 18:55:40.194403       6 log.go:172] (0xc00141bf40) (3) Data frame handling
I0827 18:55:40.194415       6 log.go:172] (0xc00141bf40) (3) Data frame sent
I0827 18:55:40.194425       6 log.go:172] (0xc001cd2a50) Data frame received for 3
I0827 18:55:40.194432       6 log.go:172] (0xc00141bf40) (3) Data frame handling
I0827 18:55:40.195600       6 log.go:172] (0xc001cd2a50) Data frame received for 1
I0827 18:55:40.195640       6 log.go:172] (0xc00141be00) (1) Data frame handling
I0827 18:55:40.195665       6 log.go:172] (0xc00141be00) (1) Data frame sent
I0827 18:55:40.195692       6 log.go:172] (0xc001cd2a50) (0xc00141be00) Stream removed, broadcasting: 1
I0827 18:55:40.195718       6 log.go:172] (0xc001cd2a50) Go away received
I0827 18:55:40.195866       6 log.go:172] (0xc001cd2a50) (0xc00141be00) Stream removed, broadcasting: 1
I0827 18:55:40.195901       6 log.go:172] (0xc001cd2a50) (0xc00141bf40) Stream removed, broadcasting: 3
I0827 18:55:40.195926       6 log.go:172] (0xc001cd2a50) (0xc001910000) Stream removed, broadcasting: 5
Aug 27 18:55:40.195: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 27 18:55:40.196: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:40.196: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:40.233502       6 log.go:172] (0xc000c2a790) (0xc000aab360) Create stream
I0827 18:55:40.233537       6 log.go:172] (0xc000c2a790) (0xc000aab360) Stream added, broadcasting: 1
I0827 18:55:40.242511       6 log.go:172] (0xc000c2a790) Reply frame received for 1
I0827 18:55:40.242596       6 log.go:172] (0xc000c2a790) (0xc0009a60a0) Create stream
I0827 18:55:40.242636       6 log.go:172] (0xc000c2a790) (0xc0009a60a0) Stream added, broadcasting: 3
I0827 18:55:40.243581       6 log.go:172] (0xc000c2a790) Reply frame received for 3
I0827 18:55:40.243640       6 log.go:172] (0xc000c2a790) (0xc000504000) Create stream
I0827 18:55:40.243661       6 log.go:172] (0xc000c2a790) (0xc000504000) Stream added, broadcasting: 5
I0827 18:55:40.245272       6 log.go:172] (0xc000c2a790) Reply frame received for 5
I0827 18:55:40.290327       6 log.go:172] (0xc000c2a790) Data frame received for 3
I0827 18:55:40.290373       6 log.go:172] (0xc0009a60a0) (3) Data frame handling
I0827 18:55:40.290412       6 log.go:172] (0xc0009a60a0) (3) Data frame sent
I0827 18:55:40.290439       6 log.go:172] (0xc000c2a790) Data frame received for 3
I0827 18:55:40.290460       6 log.go:172] (0xc0009a60a0) (3) Data frame handling
I0827 18:55:40.290765       6 log.go:172] (0xc000c2a790) Data frame received for 5
I0827 18:55:40.290793       6 log.go:172] (0xc000504000) (5) Data frame handling
I0827 18:55:40.292362       6 log.go:172] (0xc000c2a790) Data frame received for 1
I0827 18:55:40.292388       6 log.go:172] (0xc000aab360) (1) Data frame handling
I0827 18:55:40.292406       6 log.go:172] (0xc000aab360) (1) Data frame sent
I0827 18:55:40.292429       6 log.go:172] (0xc000c2a790) (0xc000aab360) Stream removed, broadcasting: 1
I0827 18:55:40.292531       6 log.go:172] (0xc000c2a790) (0xc000aab360) Stream removed, broadcasting: 1
I0827 18:55:40.292558       6 log.go:172] (0xc000c2a790) (0xc0009a60a0) Stream removed, broadcasting: 3
I0827 18:55:40.292810       6 log.go:172] (0xc000c2a790) (0xc000504000) Stream removed, broadcasting: 5
Aug 27 18:55:40.293: INFO: Exec stderr: ""
Aug 27 18:55:40.293: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:40.293: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:40.295134       6 log.go:172] (0xc000c2a790) Go away received
I0827 18:55:40.324303       6 log.go:172] (0xc000c2a0b0) (0xc00141a0a0) Create stream
I0827 18:55:40.324339       6 log.go:172] (0xc000c2a0b0) (0xc00141a0a0) Stream added, broadcasting: 1
I0827 18:55:40.326002       6 log.go:172] (0xc000c2a0b0) Reply frame received for 1
I0827 18:55:40.326036       6 log.go:172] (0xc000c2a0b0) (0xc0009a6320) Create stream
I0827 18:55:40.326060       6 log.go:172] (0xc000c2a0b0) (0xc0009a6320) Stream added, broadcasting: 3
I0827 18:55:40.327037       6 log.go:172] (0xc000c2a0b0) Reply frame received for 3
I0827 18:55:40.327075       6 log.go:172] (0xc000c2a0b0) (0xc0005040a0) Create stream
I0827 18:55:40.327089       6 log.go:172] (0xc000c2a0b0) (0xc0005040a0) Stream added, broadcasting: 5
I0827 18:55:40.328004       6 log.go:172] (0xc000c2a0b0) Reply frame received for 5
I0827 18:55:40.413669       6 log.go:172] (0xc000c2a0b0) Data frame received for 3
I0827 18:55:40.413709       6 log.go:172] (0xc0009a6320) (3) Data frame handling
I0827 18:55:40.413723       6 log.go:172] (0xc0009a6320) (3) Data frame sent
I0827 18:55:40.413729       6 log.go:172] (0xc000c2a0b0) Data frame received for 3
I0827 18:55:40.413736       6 log.go:172] (0xc0009a6320) (3) Data frame handling
I0827 18:55:40.413777       6 log.go:172] (0xc000c2a0b0) Data frame received for 5
I0827 18:55:40.413808       6 log.go:172] (0xc0005040a0) (5) Data frame handling
I0827 18:55:40.420874       6 log.go:172] (0xc000c2a0b0) Data frame received for 1
I0827 18:55:40.420887       6 log.go:172] (0xc00141a0a0) (1) Data frame handling
I0827 18:55:40.420894       6 log.go:172] (0xc00141a0a0) (1) Data frame sent
I0827 18:55:40.420900       6 log.go:172] (0xc000c2a0b0) (0xc00141a0a0) Stream removed, broadcasting: 1
I0827 18:55:40.420920       6 log.go:172] (0xc000c2a0b0) Go away received
I0827 18:55:40.421000       6 log.go:172] (0xc000c2a0b0) (0xc00141a0a0) Stream removed, broadcasting: 1
I0827 18:55:40.421024       6 log.go:172] (0xc000c2a0b0) (0xc0009a6320) Stream removed, broadcasting: 3
I0827 18:55:40.421041       6 log.go:172] (0xc000c2a0b0) (0xc0005040a0) Stream removed, broadcasting: 5
Aug 27 18:55:40.421: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 27 18:55:40.421: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:40.421: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:40.448175       6 log.go:172] (0xc000c2a630) (0xc00141a3c0) Create stream
I0827 18:55:40.448208       6 log.go:172] (0xc000c2a630) (0xc00141a3c0) Stream added, broadcasting: 1
I0827 18:55:40.458711       6 log.go:172] (0xc000c2a630) Reply frame received for 1
I0827 18:55:40.458766       6 log.go:172] (0xc000c2a630) (0xc002668000) Create stream
I0827 18:55:40.458788       6 log.go:172] (0xc000c2a630) (0xc002668000) Stream added, broadcasting: 3
I0827 18:55:40.462004       6 log.go:172] (0xc000c2a630) Reply frame received for 3
I0827 18:55:40.462031       6 log.go:172] (0xc000c2a630) (0xc00141a460) Create stream
I0827 18:55:40.462040       6 log.go:172] (0xc000c2a630) (0xc00141a460) Stream added, broadcasting: 5
I0827 18:55:40.463589       6 log.go:172] (0xc000c2a630) Reply frame received for 5
I0827 18:55:40.523495       6 log.go:172] (0xc000c2a630) Data frame received for 3
I0827 18:55:40.523522       6 log.go:172] (0xc002668000) (3) Data frame handling
I0827 18:55:40.523529       6 log.go:172] (0xc002668000) (3) Data frame sent
I0827 18:55:40.523535       6 log.go:172] (0xc000c2a630) Data frame received for 3
I0827 18:55:40.523539       6 log.go:172] (0xc002668000) (3) Data frame handling
I0827 18:55:40.523565       6 log.go:172] (0xc000c2a630) Data frame received for 5
I0827 18:55:40.523572       6 log.go:172] (0xc00141a460) (5) Data frame handling
I0827 18:55:40.525176       6 log.go:172] (0xc000c2a630) Data frame received for 1
I0827 18:55:40.525207       6 log.go:172] (0xc00141a3c0) (1) Data frame handling
I0827 18:55:40.525231       6 log.go:172] (0xc00141a3c0) (1) Data frame sent
I0827 18:55:40.525285       6 log.go:172] (0xc000c2a630) (0xc00141a3c0) Stream removed, broadcasting: 1
I0827 18:55:40.525312       6 log.go:172] (0xc000c2a630) Go away received
I0827 18:55:40.525440       6 log.go:172] (0xc000c2a630) (0xc00141a3c0) Stream removed, broadcasting: 1
I0827 18:55:40.525461       6 log.go:172] (0xc000c2a630) (0xc002668000) Stream removed, broadcasting: 3
I0827 18:55:40.525468       6 log.go:172] (0xc000c2a630) (0xc00141a460) Stream removed, broadcasting: 5
Aug 27 18:55:40.525: INFO: Exec stderr: ""
Aug 27 18:55:40.525: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:40.525: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:40.553206       6 log.go:172] (0xc001cd2420) (0xc0005045a0) Create stream
I0827 18:55:40.553232       6 log.go:172] (0xc001cd2420) (0xc0005045a0) Stream added, broadcasting: 1
I0827 18:55:40.555400       6 log.go:172] (0xc001cd2420) Reply frame received for 1
I0827 18:55:40.555434       6 log.go:172] (0xc001cd2420) (0xc0009a63c0) Create stream
I0827 18:55:40.555446       6 log.go:172] (0xc001cd2420) (0xc0009a63c0) Stream added, broadcasting: 3
I0827 18:55:40.556313       6 log.go:172] (0xc001cd2420) Reply frame received for 3
I0827 18:55:40.556334       6 log.go:172] (0xc001cd2420) (0xc0009a6500) Create stream
I0827 18:55:40.556340       6 log.go:172] (0xc001cd2420) (0xc0009a6500) Stream added, broadcasting: 5
I0827 18:55:40.557299       6 log.go:172] (0xc001cd2420) Reply frame received for 5
I0827 18:55:40.619096       6 log.go:172] (0xc001cd2420) Data frame received for 5
I0827 18:55:40.619145       6 log.go:172] (0xc0009a6500) (5) Data frame handling
I0827 18:55:40.619177       6 log.go:172] (0xc001cd2420) Data frame received for 3
I0827 18:55:40.619191       6 log.go:172] (0xc0009a63c0) (3) Data frame handling
I0827 18:55:40.619214       6 log.go:172] (0xc0009a63c0) (3) Data frame sent
I0827 18:55:40.619231       6 log.go:172] (0xc001cd2420) Data frame received for 3
I0827 18:55:40.619243       6 log.go:172] (0xc0009a63c0) (3) Data frame handling
I0827 18:55:40.620224       6 log.go:172] (0xc001cd2420) Data frame received for 1
I0827 18:55:40.620247       6 log.go:172] (0xc0005045a0) (1) Data frame handling
I0827 18:55:40.620265       6 log.go:172] (0xc0005045a0) (1) Data frame sent
I0827 18:55:40.620287       6 log.go:172] (0xc001cd2420) (0xc0005045a0) Stream removed, broadcasting: 1
I0827 18:55:40.620367       6 log.go:172] (0xc001cd2420) Go away received
I0827 18:55:40.620399       6 log.go:172] (0xc001cd2420) (0xc0005045a0) Stream removed, broadcasting: 1
I0827 18:55:40.620424       6 log.go:172] (0xc001cd2420) (0xc0009a63c0) Stream removed, broadcasting: 3
I0827 18:55:40.620433       6 log.go:172] (0xc001cd2420) (0xc0009a6500) Stream removed, broadcasting: 5
Aug 27 18:55:40.620: INFO: Exec stderr: ""
Aug 27 18:55:40.620: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:40.620: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:40.656276       6 log.go:172] (0xc000d9f600) (0xc002668280) Create stream
I0827 18:55:40.656309       6 log.go:172] (0xc000d9f600) (0xc002668280) Stream added, broadcasting: 1
I0827 18:55:40.658755       6 log.go:172] (0xc000d9f600) Reply frame received for 1
I0827 18:55:40.658809       6 log.go:172] (0xc000d9f600) (0xc002668320) Create stream
I0827 18:55:40.658826       6 log.go:172] (0xc000d9f600) (0xc002668320) Stream added, broadcasting: 3
I0827 18:55:40.659721       6 log.go:172] (0xc000d9f600) Reply frame received for 3
I0827 18:55:40.659762       6 log.go:172] (0xc000d9f600) (0xc0026683c0) Create stream
I0827 18:55:40.659778       6 log.go:172] (0xc000d9f600) (0xc0026683c0) Stream added, broadcasting: 5
I0827 18:55:40.660535       6 log.go:172] (0xc000d9f600) Reply frame received for 5
I0827 18:55:40.723824       6 log.go:172] (0xc000d9f600) Data frame received for 5
I0827 18:55:40.723867       6 log.go:172] (0xc0026683c0) (5) Data frame handling
I0827 18:55:40.723907       6 log.go:172] (0xc000d9f600) Data frame received for 3
I0827 18:55:40.723922       6 log.go:172] (0xc002668320) (3) Data frame handling
I0827 18:55:40.723943       6 log.go:172] (0xc002668320) (3) Data frame sent
I0827 18:55:40.723954       6 log.go:172] (0xc000d9f600) Data frame received for 3
I0827 18:55:40.723959       6 log.go:172] (0xc002668320) (3) Data frame handling
I0827 18:55:40.724961       6 log.go:172] (0xc000d9f600) Data frame received for 1
I0827 18:55:40.724982       6 log.go:172] (0xc002668280) (1) Data frame handling
I0827 18:55:40.725006       6 log.go:172] (0xc002668280) (1) Data frame sent
I0827 18:55:40.725025       6 log.go:172] (0xc000d9f600) (0xc002668280) Stream removed, broadcasting: 1
I0827 18:55:40.725121       6 log.go:172] (0xc000d9f600) (0xc002668280) Stream removed, broadcasting: 1
I0827 18:55:40.725140       6 log.go:172] (0xc000d9f600) (0xc002668320) Stream removed, broadcasting: 3
I0827 18:55:40.725153       6 log.go:172] (0xc000d9f600) (0xc0026683c0) Stream removed, broadcasting: 5
Aug 27 18:55:40.725: INFO: Exec stderr: ""
I0827 18:55:40.725177       6 log.go:172] (0xc000d9f600) Go away received
Aug 27 18:55:40.725: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-hngzh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 18:55:40.725: INFO: >>> kubeConfig: /root/.kube/config
I0827 18:55:40.754589       6 log.go:172] (0xc0029384d0) (0xc002326460) Create stream
I0827 18:55:40.754627       6 log.go:172] (0xc0029384d0) (0xc002326460) Stream added, broadcasting: 1
I0827 18:55:40.756904       6 log.go:172] (0xc0029384d0) Reply frame received for 1
I0827 18:55:40.756962       6 log.go:172] (0xc0029384d0) (0xc002668460) Create stream
I0827 18:55:40.756985       6 log.go:172] (0xc0029384d0) (0xc002668460) Stream added, broadcasting: 3
I0827 18:55:40.758020       6 log.go:172] (0xc0029384d0) Reply frame received for 3
I0827 18:55:40.758065       6 log.go:172] (0xc0029384d0) (0xc002326500) Create stream
I0827 18:55:40.758079       6 log.go:172] (0xc0029384d0) (0xc002326500) Stream added, broadcasting: 5
I0827 18:55:40.759066       6 log.go:172] (0xc0029384d0) Reply frame received for 5
I0827 18:55:40.827239       6 log.go:172] (0xc0029384d0) Data frame received for 5
I0827 18:55:40.827298       6 log.go:172] (0xc002326500) (5) Data frame handling
I0827 18:55:40.827363       6 log.go:172] (0xc0029384d0) Data frame received for 3
I0827 18:55:40.827420       6 log.go:172] (0xc002668460) (3) Data frame handling
I0827 18:55:40.827450       6 log.go:172] (0xc002668460) (3) Data frame sent
I0827 18:55:40.827471       6 log.go:172] (0xc0029384d0) Data frame received for 3
I0827 18:55:40.827484       6 log.go:172] (0xc002668460) (3) Data frame handling
I0827 18:55:40.828656       6 log.go:172] (0xc0029384d0) Data frame received for 1
I0827 18:55:40.828677       6 log.go:172] (0xc002326460) (1) Data frame handling
I0827 18:55:40.828702       6 log.go:172] (0xc002326460) (1) Data frame sent
I0827 18:55:40.829097       6 log.go:172] (0xc0029384d0) (0xc002326460) Stream removed, broadcasting: 1
I0827 18:55:40.829142       6 log.go:172] (0xc0029384d0) Go away received
I0827 18:55:40.829285       6 log.go:172] (0xc0029384d0) (0xc002326460) Stream removed, broadcasting: 1
I0827 18:55:40.829329       6 log.go:172] (0xc0029384d0) (0xc002668460) Stream removed, broadcasting: 3
I0827 18:55:40.829358       6 log.go:172] (0xc0029384d0) (0xc002326500) Stream removed, broadcasting: 5
Aug 27 18:55:40.829: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:55:40.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-hngzh" for this suite.
Aug 27 18:56:30.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:56:30.863: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-hngzh, resource: bindings, ignored listing per whitelist
Aug 27 18:56:30.931: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-hngzh deletion completed in 50.097827892s

• [SLOW TEST:61.501 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
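
The KubeletManagedEtcHosts run above checks three cases: containers in a hostNetwork=false pod get a kubelet-managed /etc/hosts, a container that mounts /etc/hosts itself (busybox-3 in the log) is left untouched, and a hostNetwork=true pod keeps the node's own file. A rough sketch of the first two cases with assumed container and volume names; the hostNetwork=true half is the same pod shape with Spec.HostNetwork set to true.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
        Spec: corev1.PodSpec{
            // HostNetwork is false by default: the kubelet manages /etc/hosts
            // for every container that does not mount it explicitly.
            Containers: []corev1.Container{
                {
                    Name:    "managed",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                },
                {
                    Name:    "self-mounted",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                    // Explicit /etc/hosts mount: the kubelet must not overwrite it.
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "host-etc-hosts",
                        MountPath: "/etc/hosts",
                    }},
                },
            },
            Volumes: []corev1.Volume{{
                Name: "host-etc-hosts",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
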
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:56:30.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0510fd9e-e897-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:56:31.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-6r4jj" to be "success or failure"
Aug 27 18:56:31.111: INFO: Pod "pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.63609ms
Aug 27 18:56:33.115: INFO: Pod "pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00787005s
Aug 27 18:56:35.119: INFO: Pod "pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011408313s
STEP: Saw pod success
Aug 27 18:56:35.119: INFO: Pod "pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:56:35.121: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 18:56:35.160: INFO: Waiting for pod pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b to disappear
Aug 27 18:56:35.171: INFO: Pod pod-projected-secrets-05118f8c-e897-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:56:35.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6r4jj" for this suite.
Aug 27 18:56:41.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:56:41.257: INFO: namespace: e2e-tests-projected-6r4jj, resource: bindings, ignored listing per whitelist
Aug 27 18:56:41.261: INFO: namespace e2e-tests-projected-6r4jj deletion completed in 6.087373619s

• [SLOW TEST:10.330 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
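
"Mappings and Item Mode set" refers to a projected Secret volume whose Items remap a key to a new path and pin the file mode. A hedged sketch with illustrative names and an assumed 0400 mode; the projected ConfigMap variants in this log are the same shape with ConfigMapProjection in place of SecretProjection.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // file mode applied to the projected item (assumed value)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                // Remap key "data-1" to a new path and pin its mode.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
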
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:56:41.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-0b3405f7-e897-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 18:56:41.415: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-6nk4w" to be "success or failure"
Aug 27 18:56:41.430: INFO: Pod "pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.704917ms
Aug 27 18:56:43.433: INFO: Pod "pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01806514s
Aug 27 18:56:45.437: INFO: Pod "pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021808798s
STEP: Saw pod success
Aug 27 18:56:45.437: INFO: Pod "pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:56:45.439: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 18:56:45.607: INFO: Waiting for pod pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b to disappear
Aug 27 18:56:45.737: INFO: Pod pod-projected-configmaps-0b34b366-e897-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:56:45.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6nk4w" for this suite.
Aug 27 18:56:51.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:56:51.826: INFO: namespace: e2e-tests-projected-6nk4w, resource: bindings, ignored listing per whitelist
Aug 27 18:56:51.866: INFO: namespace e2e-tests-projected-6nk4w deletion completed in 6.125135237s

• [SLOW TEST:10.604 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:56:51.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Aug 27 18:56:51.980: INFO: Waiting up to 5m0s for pod "client-containers-118164bc-e897-11ea-b58c-0242ac11000b" in namespace "e2e-tests-containers-jc2l6" to be "success or failure"
Aug 27 18:56:51.997: INFO: Pod "client-containers-118164bc-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.173623ms
Aug 27 18:56:54.000: INFO: Pod "client-containers-118164bc-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019907539s
Aug 27 18:56:56.003: INFO: Pod "client-containers-118164bc-e897-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022956266s
STEP: Saw pod success
Aug 27 18:56:56.004: INFO: Pod "client-containers-118164bc-e897-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:56:56.005: INFO: Trying to get logs from node hunter-worker pod client-containers-118164bc-e897-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 18:56:56.058: INFO: Waiting for pod client-containers-118164bc-e897-11ea-b58c-0242ac11000b to disappear
Aug 27 18:56:56.069: INFO: Pod client-containers-118164bc-e897-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:56:56.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jc2l6" for this suite.
Aug 27 18:57:02.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:57:02.135: INFO: namespace: e2e-tests-containers-jc2l6, resource: bindings, ignored listing per whitelist
Aug 27 18:57:02.177: INFO: namespace e2e-tests-containers-jc2l6 deletion completed in 6.105041702s

• [SLOW TEST:10.310 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
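
The Docker Containers spec above relies on Container.Args replacing the image's default CMD while leaving its ENTRYPOINT intact (setting Command would replace the ENTRYPOINT as well). A minimal sketch with illustrative names and arguments, not the suite's entrypoint-tester fixture.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Args only: the image's default CMD is overridden with these values.
                Args: []string{"echo", "override", "arguments"},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
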
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:57:02.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:57:08.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-bfqpk" for this suite.
Aug 27 18:57:48.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:57:48.536: INFO: namespace: e2e-tests-kubelet-test-bfqpk, resource: bindings, ignored listing per whitelist
Aug 27 18:57:48.592: INFO: namespace e2e-tests-kubelet-test-bfqpk deletion completed in 40.136519333s

• [SLOW TEST:46.415 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
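
The hostAliases case above exercises Spec.HostAliases, entries the kubelet appends to the managed /etc/hosts it writes for the pod. A small sketch; the IP and hostnames are placeholders.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // Appended by the kubelet to the pod's /etc/hosts.
            HostAliases: []corev1.HostAlias{{
                IP:        "123.45.67.89",
                Hostnames: []string{"foo.example.com", "bar.example.com"},
            }},
            Containers: []corev1.Container{{
                Name:    "busybox-host-aliases",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/hosts"},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
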
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:57:48.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:57:48.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:57:52.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-r9ffm" for this suite.
Aug 27 18:58:40.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:58:40.949: INFO: namespace: e2e-tests-pods-r9ffm, resource: bindings, ignored listing per whitelist
Aug 27 18:58:40.999: INFO: namespace e2e-tests-pods-r9ffm deletion completed in 48.090093867s

• [SLOW TEST:52.407 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:58:40.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 18:58:41.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 27 18:58:41.255: INFO: stderr: ""
Aug 27 18:58:41.255: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-23T03:53:49Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:58:41.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5mbz4" for this suite.
Aug 27 18:58:47.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:58:47.344: INFO: namespace: e2e-tests-kubectl-5mbz4, resource: bindings, ignored listing per whitelist
Aug 27 18:58:47.364: INFO: namespace e2e-tests-kubectl-5mbz4 deletion completed in 6.085369894s

• [SLOW TEST:6.364 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:58:47.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-565b0167-e897-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 18:58:47.493: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-62bdh" to be "success or failure"
Aug 27 18:58:47.497: INFO: Pod "pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.951752ms
Aug 27 18:58:49.501: INFO: Pod "pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008007785s
Aug 27 18:58:51.507: INFO: Pod "pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013646705s
STEP: Saw pod success
Aug 27 18:58:51.507: INFO: Pod "pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 18:58:51.512: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 18:58:51.563: INFO: Waiting for pod pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b to disappear
Aug 27 18:58:51.614: INFO: Pod pod-projected-secrets-565cf52d-e897-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 18:58:51.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-62bdh" for this suite.
Aug 27 18:58:57.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 18:58:57.735: INFO: namespace: e2e-tests-projected-62bdh, resource: bindings, ignored listing per whitelist
Aug 27 18:58:57.756: INFO: namespace e2e-tests-projected-62bdh deletion completed in 6.136882988s

• [SLOW TEST:10.392 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 18:58:57.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-5c92f8c5-e897-11ea-b58c-0242ac11000b
STEP: Creating configMap with name cm-test-opt-upd-5c92f916-e897-11ea-b58c-0242ac11000b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5c92f8c5-e897-11ea-b58c-0242ac11000b
STEP: Updating configmap cm-test-opt-upd-5c92f916-e897-11ea-b58c-0242ac11000b
STEP: Creating configMap with name cm-test-opt-create-5c92f92c-e897-11ea-b58c-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:00:12.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4p22x" for this suite.
Aug 27 19:00:36.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:00:36.612: INFO: namespace: e2e-tests-projected-4p22x, resource: bindings, ignored listing per whitelist
Aug 27 19:00:36.622: INFO: namespace e2e-tests-projected-4p22x deletion completed in 24.080070813s

• [SLOW TEST:98.866 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
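
"Optional updates" covers a projected ConfigMap source marked Optional, so the pod starts even while the ConfigMap is absent and the kubelet refreshes the mounted files as ConfigMaps are created, updated, or deleted, which is what the "waiting to observe update in volume" step polls for. A sketch with assumed names.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-optional-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/cm-volume/data-1 2>/dev/null; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "cm-volume",
                    MountPath: "/etc/cm-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "cm-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
                                // Pod starts even if the ConfigMap does not exist yet.
                                Optional: &optional,
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
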
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:00:36.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Aug 27 19:00:36.830: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:00:36.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bsr4g" for this suite.
Aug 27 19:00:43.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:00:43.057: INFO: namespace: e2e-tests-kubectl-bsr4g, resource: bindings, ignored listing per whitelist
Aug 27 19:00:43.114: INFO: namespace e2e-tests-kubectl-bsr4g deletion completed in 6.203986002s

• [SLOW TEST:6.492 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:00:43.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-bpz8
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 19:00:43.576: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bpz8" in namespace "e2e-tests-subpath-ds8wt" to be "success or failure"
Aug 27 19:00:43.591: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.088727ms
Aug 27 19:00:45.595: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018592253s
Aug 27 19:00:47.599: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022928585s
Aug 27 19:00:49.603: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026956646s
Aug 27 19:00:51.606: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 8.02992925s
Aug 27 19:00:53.612: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 10.035024734s
Aug 27 19:00:55.617: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 12.040004703s
Aug 27 19:00:57.621: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 14.044525411s
Aug 27 19:00:59.625: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 16.048427149s
Aug 27 19:01:01.629: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 18.052891443s
Aug 27 19:01:03.634: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 20.057677501s
Aug 27 19:01:05.639: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 22.062170379s
Aug 27 19:01:07.643: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Running", Reason="", readiness=false. Elapsed: 24.066824873s
Aug 27 19:01:09.647: INFO: Pod "pod-subpath-test-downwardapi-bpz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.070385884s
STEP: Saw pod success
Aug 27 19:01:09.647: INFO: Pod "pod-subpath-test-downwardapi-bpz8" satisfied condition "success or failure"
Aug 27 19:01:09.651: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-bpz8 container test-container-subpath-downwardapi-bpz8: 
STEP: delete the pod
Aug 27 19:01:09.690: INFO: Waiting for pod pod-subpath-test-downwardapi-bpz8 to disappear
Aug 27 19:01:09.704: INFO: Pod pod-subpath-test-downwardapi-bpz8 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-bpz8
Aug 27 19:01:09.704: INFO: Deleting pod "pod-subpath-test-downwardapi-bpz8" in namespace "e2e-tests-subpath-ds8wt"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:01:09.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ds8wt" for this suite.
Aug 27 19:01:15.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:01:15.755: INFO: namespace: e2e-tests-subpath-ds8wt, resource: bindings, ignored listing per whitelist
Aug 27 19:01:15.811: INFO: namespace e2e-tests-subpath-ds8wt deletion completed in 6.101765829s

• [SLOW TEST:32.697 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
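
The subpath spec above mounts a single file out of a downwardAPI volume by setting VolumeMount.SubPath. A sketch with illustrative names that exposes metadata.name as a file and mounts just that file into the container.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /subpath/podname"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "downward-volume",
                    MountPath: "/subpath/podname",
                    SubPath:   "podname", // mount only this file from the volume
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "downward-volume",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
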
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:01:15.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-aee5e1f9-e897-11ea-b58c-0242ac11000b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-aee5e1f9-e897-11ea-b58c-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:01:22.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ksljc" for this suite.
Aug 27 19:01:44.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:01:44.203: INFO: namespace: e2e-tests-configmap-ksljc, resource: bindings, ignored listing per whitelist
Aug 27 19:01:44.244: INFO: namespace e2e-tests-configmap-ksljc deletion completed in 22.148892137s

• [SLOW TEST:28.433 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
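
The plain ConfigMap-volume spec above works because the kubelet periodically re-syncs configMap-backed volumes, so an edit to the ConfigMap shows up in the mounted files without restarting the pod. A short sketch with assumed names; it differs from the projected case only in using ConfigMapVolumeSource directly.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-update-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-upd"},
                    },
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
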
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:01:44.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 27 19:01:56.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 19:01:56.653: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 19:01:58.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 19:01:58.658: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 19:02:00.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 19:02:00.657: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 19:02:02.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 19:02:02.663: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 19:02:04.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 19:02:04.657: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 19:02:06.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 19:02:06.656: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 19:02:08.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 19:02:08.656: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:02:08.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tnb7k" for this suite.
Aug 27 19:02:32.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:02:32.709: INFO: namespace: e2e-tests-container-lifecycle-hook-tnb7k, resource: bindings, ignored listing per whitelist
Aug 27 19:02:32.762: INFO: namespace e2e-tests-container-lifecycle-hook-tnb7k deletion completed in 24.102363375s

• [SLOW TEST:48.518 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
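
The lifecycle-hook test above first starts a helper pod that serves HTTP ("create the container to handle the HTTPGet hook request") and then creates pod-with-poststart-http-hook, whose postStart hook issues an HTTP GET against that helper. A minimal Go sketch of the hook wiring follows, against the core/v1 API of the 1.13 era shown in this log (newer releases rename Handler to LifecycleHandler); the handler IP, port, path, and image are placeholders.

// poststart_http_hook_sketch.go -- attaching a postStart httpGet hook to a container.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    handlerPodIP := "10.244.0.99" // placeholder: the real test looks up the helper pod's IP

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-poststart-http-hook",
                Image: "nginx", // placeholder image
                Lifecycle: &corev1.Lifecycle{
                    // PostStart runs right after the container starts; the kubelet performs
                    // the GET, and the helper records the hit ("check poststart hook").
                    PostStart: &corev1.Handler{ // LifecycleHandler in newer API versions
                        HTTPGet: &corev1.HTTPGetAction{
                            Host: handlerPodIP,
                            Port: intstr.FromInt(8080),
                            Path: "/echo?msg=poststart",
                        },
                    },
                },
            }},
        },
    }
    fmt.Printf("%s has a postStart hook: %v\n", pod.Name, pod.Spec.Containers[0].Lifecycle.PostStart != nil)
}

After the check, the test deletes the pod and polls until it disappears, which is the "Waiting for pod ... to disappear" sequence in the log above.
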
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:02:32.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 19:02:58.907: INFO: Container started at 2020-08-27 19:02:35 +0000 UTC, pod became ready at 2020-08-27 19:02:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:02:58.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-w4jms" for this suite.
Aug 27 19:03:22.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:03:22.950: INFO: namespace: e2e-tests-container-probe-w4jms, resource: bindings, ignored listing per whitelist
Aug 27 19:03:23.013: INFO: namespace e2e-tests-container-probe-w4jms deletion completed in 24.102103255s

• [SLOW TEST:50.250 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
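
The readiness-probe test above asserts two things visible in the log: the pod did not report Ready before the probe's initial delay had elapsed (started 19:02:35, Ready 19:02:57), and the container never restarted. A minimal Go sketch of a container with a delayed readiness probe follows; the probe command, delay, and image are illustrative values, not the suite's exact ones.

// readiness_initial_delay_sketch.go -- a readiness probe with an initial delay, so the
// pod stays NotReady for at least that long after the container starts, and never
// restarts because no liveness probe is configured and the container keeps running.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    probe := &corev1.Probe{
        InitialDelaySeconds: 20, // illustrative; readiness is not evaluated before this
        PeriodSeconds:       5,
        FailureThreshold:    3,
    }
    // Assigning through the embedded handler struct keeps this snippet compatible with
    // both the older Handler and the newer ProbeHandler field names.
    probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}}

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:           "test-webserver",
                Image:          "busybox", // placeholder
                Command:        []string{"/bin/sh", "-c", "touch /tmp/ready; sleep 600"},
                ReadinessProbe: probe,
            }},
        },
    }
    fmt.Printf("readiness initial delay: %ds\n", pod.Spec.Containers[0].ReadinessProbe.InitialDelaySeconds)
}
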
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:03:23.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Aug 27 19:03:23.148: INFO: Waiting up to 5m0s for pod "var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b" in namespace "e2e-tests-var-expansion-tll4h" to be "success or failure"
Aug 27 19:03:23.152: INFO: Pod "var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.930198ms
Aug 27 19:03:25.156: INFO: Pod "var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007465732s
Aug 27 19:03:27.558: INFO: Pod "var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.409818208s
Aug 27 19:03:29.562: INFO: Pod "var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.413738208s
STEP: Saw pod success
Aug 27 19:03:29.562: INFO: Pod "var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:03:29.565: INFO: Trying to get logs from node hunter-worker pod var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b container dapi-container: 
STEP: delete the pod
Aug 27 19:03:29.725: INFO: Waiting for pod var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b to disappear
Aug 27 19:03:29.888: INFO: Pod var-expansion-faa91d6c-e897-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:03:29.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-tll4h" for this suite.
Aug 27 19:03:35.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:03:35.925: INFO: namespace: e2e-tests-var-expansion-tll4h, resource: bindings, ignored listing per whitelist
Aug 27 19:03:36.012: INFO: namespace e2e-tests-var-expansion-tll4h deletion completed in 6.120336254s

• [SLOW TEST:12.999 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
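
The Variable Expansion test above verifies that $(VAR) references in a container's args are substituted from the container's own environment before the container starts. A minimal Go sketch of such a spec follows; the image and the echoed value are placeholders.

// var_expansion_args_sketch.go -- $(VAR) references in command/args are expanded by the
// kubelet from the container's env; the test then checks the container's output.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox", // placeholder
                Command: []string{"sh", "-c"},
                // "$(TEST_VAR)" is replaced with "test-value"; writing "$$(TEST_VAR)"
                // would escape the reference and pass it through literally.
                Args: []string{"echo $(TEST_VAR)"},
                Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
            }},
        },
    }
    fmt.Println(pod.Spec.Containers[0].Args)
}
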
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:03:36.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-026a0012-e898-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 19:03:36.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-t2bbf" to be "success or failure"
Aug 27 19:03:36.207: INFO: Pod "pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.492878ms
Aug 27 19:03:38.211: INFO: Pod "pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02941208s
Aug 27 19:03:40.214: INFO: Pod "pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.032916409s
Aug 27 19:03:42.219: INFO: Pod "pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037628532s
STEP: Saw pod success
Aug 27 19:03:42.219: INFO: Pod "pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:03:42.223: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Aug 27 19:03:42.284: INFO: Waiting for pod pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b to disappear
Aug 27 19:03:42.292: INFO: Pod pod-configmaps-026aecc4-e898-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:03:42.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t2bbf" for this suite.
Aug 27 19:03:48.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:03:48.462: INFO: namespace: e2e-tests-configmap-t2bbf, resource: bindings, ignored listing per whitelist
Aug 27 19:03:48.470: INFO: namespace e2e-tests-configmap-t2bbf deletion completed in 6.174525491s

• [SLOW TEST:12.457 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
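
The "mappings as non-root" test above projects a single ConfigMap key to a chosen relative path and reads it from a container running as a non-root UID. A minimal Go sketch follows; the UID, key names, paths, and image are illustrative, not the framework's exact fixture.

// configmap_mappings_nonroot_sketch.go -- a ConfigMap key remapped to a chosen path
// ("mappings") and read by a container running as a non-root user.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
        Data:       map[string]string{"data-2": "value-2"},
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-map-nonroot"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                    // Only the listed key is projected, under the chosen relative path.
                    Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
                }},
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "busybox", // placeholder
                Command:      []string{"/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
                SecurityContext: &corev1.SecurityContext{
                    RunAsUser:    int64Ptr(1000), // non-root; projected files must still be readable
                    RunAsNonRoot: boolPtr(true),
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
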
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:03:48.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-nljtj/configmap-test-09d74632-e898-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume configMaps
Aug 27 19:03:48.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b" in namespace "e2e-tests-configmap-nljtj" to be "success or failure"
Aug 27 19:03:48.621: INFO: Pod "pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.735523ms
Aug 27 19:03:50.635: INFO: Pod "pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01760425s
Aug 27 19:03:52.640: INFO: Pod "pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022039822s
STEP: Saw pod success
Aug 27 19:03:52.640: INFO: Pod "pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:03:52.642: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b container env-test: 
STEP: delete the pod
Aug 27 19:03:52.697: INFO: Waiting for pod pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b to disappear
Aug 27 19:03:52.729: INFO: Pod pod-configmaps-09d7d519-e898-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:03:52.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nljtj" for this suite.
Aug 27 19:03:58.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:03:58.760: INFO: namespace: e2e-tests-configmap-nljtj, resource: bindings, ignored listing per whitelist
Aug 27 19:03:58.840: INFO: namespace e2e-tests-configmap-nljtj deletion completed in 6.107338396s

• [SLOW TEST:10.370 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
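
The [sig-node] ConfigMap test above consumes a ConfigMap key as an environment variable and then checks the container's output (hence the log fetch from container env-test after the pod succeeds). A minimal Go sketch of the env wiring follows; names and image are illustrative.

// configmap_env_sketch.go -- a ConfigMap key injected as an environment variable.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox", // placeholder
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                        Key:                  "data-1",
                    }},
                }},
            }},
        },
    }
    fmt.Println(pod.Spec.Containers[0].Env[0].Name)
}

The expected outcome is that the container's env dump contains CONFIG_DATA_1=value-1, which is what "Saw pod success" plus the log check verifies.
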
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:03:58.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 27 19:03:58.964: INFO: Waiting up to 5m0s for pod "pod-0fff2b45-e898-11ea-b58c-0242ac11000b" in namespace "e2e-tests-emptydir-ghx6k" to be "success or failure"
Aug 27 19:03:58.978: INFO: Pod "pod-0fff2b45-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.688404ms
Aug 27 19:04:00.981: INFO: Pod "pod-0fff2b45-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016874875s
Aug 27 19:04:02.985: INFO: Pod "pod-0fff2b45-e898-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.02087205s
Aug 27 19:04:04.990: INFO: Pod "pod-0fff2b45-e898-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025067127s
STEP: Saw pod success
Aug 27 19:04:04.990: INFO: Pod "pod-0fff2b45-e898-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:04:04.993: INFO: Trying to get logs from node hunter-worker2 pod pod-0fff2b45-e898-11ea-b58c-0242ac11000b container test-container: 
STEP: delete the pod
Aug 27 19:04:05.061: INFO: Waiting for pod pod-0fff2b45-e898-11ea-b58c-0242ac11000b to disappear
Aug 27 19:04:05.081: INFO: Pod pod-0fff2b45-e898-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:04:05.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ghx6k" for this suite.
Aug 27 19:04:11.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:04:11.131: INFO: namespace: e2e-tests-emptydir-ghx6k, resource: bindings, ignored listing per whitelist
Aug 27 19:04:11.189: INFO: namespace e2e-tests-emptydir-ghx6k deletion completed in 6.105052294s

• [SLOW TEST:12.349 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
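
The EmptyDir (root,0777,tmpfs) test above mounts a memory-backed emptyDir and checks, as root, that the mount is tmpfs with 0777 permissions and is read/writable. A minimal Go sketch of the volume wiring follows; the image and the probe command are placeholders for the suite's mounttest image.

// emptydir_tmpfs_sketch.go -- a memory-backed (tmpfs) emptyDir volume.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the volume with tmpfs instead of node disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // placeholder
                Command: []string{"/bin/sh", "-c",
                    "stat -c %a /test-volume && mount | grep /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
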
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:04:11.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 19:04:11.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-kzzdb" to be "success or failure"
Aug 27 19:04:11.365: INFO: Pod "downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.997358ms
Aug 27 19:04:13.369: INFO: Pod "downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026812182s
Aug 27 19:04:15.372: INFO: Pod "downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02996378s
STEP: Saw pod success
Aug 27 19:04:15.372: INFO: Pod "downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:04:15.374: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 19:04:15.421: INFO: Waiting for pod downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b to disappear
Aug 27 19:04:15.687: INFO: Pod downwardapi-volume-1762922e-e898-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:04:15.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kzzdb" for this suite.
Aug 27 19:04:21.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:04:21.969: INFO: namespace: e2e-tests-projected-kzzdb, resource: bindings, ignored listing per whitelist
Aug 27 19:04:21.995: INFO: namespace e2e-tests-projected-kzzdb deletion completed in 6.303931331s

• [SLOW TEST:10.806 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
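
The Projected downwardAPI test above sets an explicit defaultMode on the projected volume and verifies the projected files carry that mode. A minimal Go sketch follows; the 0400 value, file path, and image are illustrative of the kind of values checked, not necessarily the suite's exact ones.

// projected_downwardapi_mode_sketch.go -- a projected downwardAPI volume with defaultMode.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: int32Ptr(0400), // applied to files without a per-item mode
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                            }},
                        },
                    }},
                }},
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox", // placeholder
                Command:      []string{"/bin/sh", "-c", "stat -c %a /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
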
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:04:21.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-lz4gr in namespace e2e-tests-proxy-nl669
I0827 19:04:22.304885       6 runners.go:184] Created replication controller with name: proxy-service-lz4gr, namespace: e2e-tests-proxy-nl669, replica count: 1
I0827 19:04:23.355326       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 19:04:24.355603       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 19:04:25.355836       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 19:04:26.356015       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 19:04:27.356180       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 19:04:28.356410       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 19:04:29.356658       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 19:04:30.356987       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 19:04:31.357215       6 runners.go:184] proxy-service-lz4gr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 19:04:31.360: INFO: setup took 9.18237073s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 27 19:04:31.364: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-nl669/pods/http:proxy-service-lz4gr-g6f88:1080/proxy/: ... [remaining proxy attempt output and the end of this test are missing from the captured log]
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 19:04:41.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:04:45.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xwqt5" for this suite.
Aug 27 19:05:31.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:05:31.828: INFO: namespace: e2e-tests-pods-xwqt5, resource: bindings, ignored listing per whitelist
Aug 27 19:05:31.833: INFO: namespace e2e-tests-pods-xwqt5 deletion completed in 46.104703111s

• [SLOW TEST:50.261 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
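
The Pods test above streams a container's logs back over a websocket connection to the API server's /api/v1/namespaces/<ns>/pods/<name>/log endpoint instead of plain HTTP. Only the pod side is sketched below; the websocket client lives in the e2e framework, and the image and command here are placeholders.

// pod_logs_websocket_sketch.go -- the kind of pod whose stdout the test reads back.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-logs-websocket"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "busybox", // placeholder
                Command: []string{"/bin/sh", "-c", "echo container is alive; sleep 600"},
            }},
        },
    }
    // The conformance check is simply that the bytes written to stdout above come back
    // verbatim when the log endpoint is read over a websocket.
    fmt.Println(pod.Spec.Containers[0].Command)
}
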
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:05:31.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7mj8k
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-7mj8k
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7mj8k
Aug 27 19:05:32.013: INFO: Found 0 stateful pods, waiting for 1
Aug 27 19:05:42.017: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 27 19:05:42.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 19:05:42.311: INFO: stderr: "I0827 19:05:42.179077    2143 log.go:172] (0xc00013a840) (0xc000730640) Create stream\nI0827 19:05:42.179142    2143 log.go:172] (0xc00013a840) (0xc000730640) Stream added, broadcasting: 1\nI0827 19:05:42.181684    2143 log.go:172] (0xc00013a840) Reply frame received for 1\nI0827 19:05:42.181730    2143 log.go:172] (0xc00013a840) (0xc000534d20) Create stream\nI0827 19:05:42.181746    2143 log.go:172] (0xc00013a840) (0xc000534d20) Stream added, broadcasting: 3\nI0827 19:05:42.182677    2143 log.go:172] (0xc00013a840) Reply frame received for 3\nI0827 19:05:42.182729    2143 log.go:172] (0xc00013a840) (0xc00069a000) Create stream\nI0827 19:05:42.182756    2143 log.go:172] (0xc00013a840) (0xc00069a000) Stream added, broadcasting: 5\nI0827 19:05:42.183614    2143 log.go:172] (0xc00013a840) Reply frame received for 5\nI0827 19:05:42.298391    2143 log.go:172] (0xc00013a840) Data frame received for 5\nI0827 19:05:42.298459    2143 log.go:172] (0xc00069a000) (5) Data frame handling\nI0827 19:05:42.298501    2143 log.go:172] (0xc00013a840) Data frame received for 3\nI0827 19:05:42.298521    2143 log.go:172] (0xc000534d20) (3) Data frame handling\nI0827 19:05:42.298554    2143 log.go:172] (0xc000534d20) (3) Data frame sent\nI0827 19:05:42.298596    2143 log.go:172] (0xc00013a840) Data frame received for 3\nI0827 19:05:42.298626    2143 log.go:172] (0xc000534d20) (3) Data frame handling\nI0827 19:05:42.301100    2143 log.go:172] (0xc00013a840) Data frame received for 1\nI0827 19:05:42.301137    2143 log.go:172] (0xc000730640) (1) Data frame handling\nI0827 19:05:42.301152    2143 log.go:172] (0xc000730640) (1) Data frame sent\nI0827 19:05:42.301176    2143 log.go:172] (0xc00013a840) (0xc000730640) Stream removed, broadcasting: 1\nI0827 19:05:42.301376    2143 log.go:172] (0xc00013a840) (0xc000730640) Stream removed, broadcasting: 1\nI0827 19:05:42.301398    2143 log.go:172] (0xc00013a840) (0xc000534d20) Stream removed, broadcasting: 3\nI0827 19:05:42.301415    2143 log.go:172] (0xc00013a840) (0xc00069a000) Stream removed, broadcasting: 5\n"
Aug 27 19:05:42.311: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 19:05:42.311: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 19:05:42.314: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 27 19:05:52.319: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 19:05:52.319: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 19:05:52.364: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:05:52.364: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:05:52.364: INFO: 
Aug 27 19:05:52.364: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 27 19:05:53.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.964143382s
Aug 27 19:05:54.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.926253811s
Aug 27 19:05:55.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.922188904s
Aug 27 19:05:56.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.917311039s
Aug 27 19:05:57.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.903075159s
Aug 27 19:05:58.438: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.894670699s
Aug 27 19:05:59.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.890195858s
Aug 27 19:06:00.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.884716207s
Aug 27 19:06:01.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 878.977417ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-7mj8k
Aug 27 19:06:02.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:06:02.639: INFO: stderr: "I0827 19:06:02.573350    2165 log.go:172] (0xc0001546e0) (0xc00070a780) Create stream\nI0827 19:06:02.573395    2165 log.go:172] (0xc0001546e0) (0xc00070a780) Stream added, broadcasting: 1\nI0827 19:06:02.574961    2165 log.go:172] (0xc0001546e0) Reply frame received for 1\nI0827 19:06:02.575005    2165 log.go:172] (0xc0001546e0) (0xc00070a820) Create stream\nI0827 19:06:02.575014    2165 log.go:172] (0xc0001546e0) (0xc00070a820) Stream added, broadcasting: 3\nI0827 19:06:02.575717    2165 log.go:172] (0xc0001546e0) Reply frame received for 3\nI0827 19:06:02.575742    2165 log.go:172] (0xc0001546e0) (0xc00070a8c0) Create stream\nI0827 19:06:02.575749    2165 log.go:172] (0xc0001546e0) (0xc00070a8c0) Stream added, broadcasting: 5\nI0827 19:06:02.576385    2165 log.go:172] (0xc0001546e0) Reply frame received for 5\nI0827 19:06:02.628573    2165 log.go:172] (0xc0001546e0) Data frame received for 3\nI0827 19:06:02.628625    2165 log.go:172] (0xc00070a820) (3) Data frame handling\nI0827 19:06:02.628642    2165 log.go:172] (0xc00070a820) (3) Data frame sent\nI0827 19:06:02.628654    2165 log.go:172] (0xc0001546e0) Data frame received for 3\nI0827 19:06:02.628675    2165 log.go:172] (0xc00070a820) (3) Data frame handling\nI0827 19:06:02.628693    2165 log.go:172] (0xc0001546e0) Data frame received for 5\nI0827 19:06:02.628704    2165 log.go:172] (0xc00070a8c0) (5) Data frame handling\nI0827 19:06:02.630695    2165 log.go:172] (0xc0001546e0) Data frame received for 1\nI0827 19:06:02.630712    2165 log.go:172] (0xc00070a780) (1) Data frame handling\nI0827 19:06:02.630731    2165 log.go:172] (0xc00070a780) (1) Data frame sent\nI0827 19:06:02.630806    2165 log.go:172] (0xc0001546e0) (0xc00070a780) Stream removed, broadcasting: 1\nI0827 19:06:02.630905    2165 log.go:172] (0xc0001546e0) Go away received\nI0827 19:06:02.630966    2165 log.go:172] (0xc0001546e0) (0xc00070a780) Stream removed, broadcasting: 1\nI0827 19:06:02.630979    2165 log.go:172] (0xc0001546e0) (0xc00070a820) Stream removed, broadcasting: 3\nI0827 19:06:02.630985    2165 log.go:172] (0xc0001546e0) (0xc00070a8c0) Stream removed, broadcasting: 5\n"
Aug 27 19:06:02.639: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 19:06:02.639: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 19:06:02.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:06:02.823: INFO: stderr: "I0827 19:06:02.758383    2187 log.go:172] (0xc000138790) (0xc00069b540) Create stream\nI0827 19:06:02.758466    2187 log.go:172] (0xc000138790) (0xc00069b540) Stream added, broadcasting: 1\nI0827 19:06:02.761049    2187 log.go:172] (0xc000138790) Reply frame received for 1\nI0827 19:06:02.761100    2187 log.go:172] (0xc000138790) (0xc00031e000) Create stream\nI0827 19:06:02.761115    2187 log.go:172] (0xc000138790) (0xc00031e000) Stream added, broadcasting: 3\nI0827 19:06:02.761930    2187 log.go:172] (0xc000138790) Reply frame received for 3\nI0827 19:06:02.761949    2187 log.go:172] (0xc000138790) (0xc00069b5e0) Create stream\nI0827 19:06:02.761956    2187 log.go:172] (0xc000138790) (0xc00069b5e0) Stream added, broadcasting: 5\nI0827 19:06:02.762693    2187 log.go:172] (0xc000138790) Reply frame received for 5\nI0827 19:06:02.816326    2187 log.go:172] (0xc000138790) Data frame received for 5\nI0827 19:06:02.816368    2187 log.go:172] (0xc00069b5e0) (5) Data frame handling\nI0827 19:06:02.816381    2187 log.go:172] (0xc00069b5e0) (5) Data frame sent\nI0827 19:06:02.816398    2187 log.go:172] (0xc000138790) Data frame received for 5\nI0827 19:06:02.816416    2187 log.go:172] (0xc00069b5e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0827 19:06:02.816448    2187 log.go:172] (0xc000138790) Data frame received for 3\nI0827 19:06:02.816470    2187 log.go:172] (0xc00031e000) (3) Data frame handling\nI0827 19:06:02.816493    2187 log.go:172] (0xc00031e000) (3) Data frame sent\nI0827 19:06:02.816510    2187 log.go:172] (0xc000138790) Data frame received for 3\nI0827 19:06:02.816518    2187 log.go:172] (0xc00031e000) (3) Data frame handling\nI0827 19:06:02.818174    2187 log.go:172] (0xc000138790) Data frame received for 1\nI0827 19:06:02.818232    2187 log.go:172] (0xc00069b540) (1) Data frame handling\nI0827 19:06:02.818253    2187 log.go:172] (0xc00069b540) (1) Data frame sent\nI0827 19:06:02.818270    2187 log.go:172] (0xc000138790) (0xc00069b540) Stream removed, broadcasting: 1\nI0827 19:06:02.818341    2187 log.go:172] (0xc000138790) Go away received\nI0827 19:06:02.818473    2187 log.go:172] (0xc000138790) (0xc00069b540) Stream removed, broadcasting: 1\nI0827 19:06:02.818489    2187 log.go:172] (0xc000138790) (0xc00031e000) Stream removed, broadcasting: 3\nI0827 19:06:02.818498    2187 log.go:172] (0xc000138790) (0xc00069b5e0) Stream removed, broadcasting: 5\n"
Aug 27 19:06:02.823: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 19:06:02.823: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 19:06:02.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:06:03.059: INFO: stderr: "I0827 19:06:02.941903    2210 log.go:172] (0xc000138630) (0xc0005ca5a0) Create stream\nI0827 19:06:02.941984    2210 log.go:172] (0xc000138630) (0xc0005ca5a0) Stream added, broadcasting: 1\nI0827 19:06:02.944335    2210 log.go:172] (0xc000138630) Reply frame received for 1\nI0827 19:06:02.944395    2210 log.go:172] (0xc000138630) (0xc0005e4000) Create stream\nI0827 19:06:02.944414    2210 log.go:172] (0xc000138630) (0xc0005e4000) Stream added, broadcasting: 3\nI0827 19:06:02.945457    2210 log.go:172] (0xc000138630) Reply frame received for 3\nI0827 19:06:02.945483    2210 log.go:172] (0xc000138630) (0xc0005e4140) Create stream\nI0827 19:06:02.945497    2210 log.go:172] (0xc000138630) (0xc0005e4140) Stream added, broadcasting: 5\nI0827 19:06:02.946317    2210 log.go:172] (0xc000138630) Reply frame received for 5\nI0827 19:06:03.045907    2210 log.go:172] (0xc000138630) Data frame received for 5\nI0827 19:06:03.046000    2210 log.go:172] (0xc0005e4140) (5) Data frame handling\nI0827 19:06:03.046033    2210 log.go:172] (0xc0005e4140) (5) Data frame sent\nI0827 19:06:03.046077    2210 log.go:172] (0xc000138630) Data frame received for 5\nI0827 19:06:03.046109    2210 log.go:172] (0xc0005e4140) (5) Data frame handling\nI0827 19:06:03.046146    2210 log.go:172] (0xc000138630) Data frame received for 3\nI0827 19:06:03.046175    2210 log.go:172] (0xc0005e4000) (3) Data frame handling\nI0827 19:06:03.046203    2210 log.go:172] (0xc0005e4000) (3) Data frame sent\nI0827 19:06:03.046230    2210 log.go:172] (0xc000138630) Data frame received for 3\nI0827 19:06:03.046261    2210 log.go:172] (0xc0005e4000) (3) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0827 19:06:03.047011    2210 log.go:172] (0xc000138630) Data frame received for 1\nI0827 19:06:03.047060    2210 log.go:172] (0xc0005ca5a0) (1) Data frame handling\nI0827 19:06:03.047096    2210 log.go:172] (0xc0005ca5a0) (1) Data frame sent\nI0827 19:06:03.047127    2210 log.go:172] (0xc000138630) (0xc0005ca5a0) Stream removed, broadcasting: 1\nI0827 19:06:03.047163    2210 log.go:172] (0xc000138630) Go away received\nI0827 19:06:03.047326    2210 log.go:172] (0xc000138630) (0xc0005ca5a0) Stream removed, broadcasting: 1\nI0827 19:06:03.047345    2210 log.go:172] (0xc000138630) (0xc0005e4000) Stream removed, broadcasting: 3\nI0827 19:06:03.047352    2210 log.go:172] (0xc000138630) (0xc0005e4140) Stream removed, broadcasting: 5\n"
Aug 27 19:06:03.059: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 19:06:03.059: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 19:06:03.066: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 19:06:03.066: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 19:06:03.066: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 27 19:06:03.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 19:06:03.350: INFO: stderr: "I0827 19:06:03.204445    2233 log.go:172] (0xc00014c840) (0xc000639360) Create stream\nI0827 19:06:03.204517    2233 log.go:172] (0xc00014c840) (0xc000639360) Stream added, broadcasting: 1\nI0827 19:06:03.206880    2233 log.go:172] (0xc00014c840) Reply frame received for 1\nI0827 19:06:03.206922    2233 log.go:172] (0xc00014c840) (0xc000639400) Create stream\nI0827 19:06:03.206937    2233 log.go:172] (0xc00014c840) (0xc000639400) Stream added, broadcasting: 3\nI0827 19:06:03.207734    2233 log.go:172] (0xc00014c840) Reply frame received for 3\nI0827 19:06:03.207774    2233 log.go:172] (0xc00014c840) (0xc0006394a0) Create stream\nI0827 19:06:03.207790    2233 log.go:172] (0xc00014c840) (0xc0006394a0) Stream added, broadcasting: 5\nI0827 19:06:03.208506    2233 log.go:172] (0xc00014c840) Reply frame received for 5\nI0827 19:06:03.342604    2233 log.go:172] (0xc00014c840) Data frame received for 3\nI0827 19:06:03.342647    2233 log.go:172] (0xc000639400) (3) Data frame handling\nI0827 19:06:03.342672    2233 log.go:172] (0xc00014c840) Data frame received for 5\nI0827 19:06:03.342701    2233 log.go:172] (0xc0006394a0) (5) Data frame handling\nI0827 19:06:03.342769    2233 log.go:172] (0xc000639400) (3) Data frame sent\nI0827 19:06:03.342817    2233 log.go:172] (0xc00014c840) Data frame received for 3\nI0827 19:06:03.342827    2233 log.go:172] (0xc000639400) (3) Data frame handling\nI0827 19:06:03.344256    2233 log.go:172] (0xc00014c840) Data frame received for 1\nI0827 19:06:03.344274    2233 log.go:172] (0xc000639360) (1) Data frame handling\nI0827 19:06:03.344284    2233 log.go:172] (0xc000639360) (1) Data frame sent\nI0827 19:06:03.344302    2233 log.go:172] (0xc00014c840) (0xc000639360) Stream removed, broadcasting: 1\nI0827 19:06:03.344316    2233 log.go:172] (0xc00014c840) Go away received\nI0827 19:06:03.344486    2233 log.go:172] (0xc00014c840) (0xc000639360) Stream removed, broadcasting: 1\nI0827 19:06:03.344512    2233 log.go:172] (0xc00014c840) (0xc000639400) Stream removed, broadcasting: 3\nI0827 19:06:03.344526    2233 log.go:172] (0xc00014c840) (0xc0006394a0) Stream removed, broadcasting: 5\n"
Aug 27 19:06:03.351: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 19:06:03.351: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 19:06:03.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 19:06:03.666: INFO: stderr: "I0827 19:06:03.541965    2255 log.go:172] (0xc000138630) (0xc00071c640) Create stream\nI0827 19:06:03.542053    2255 log.go:172] (0xc000138630) (0xc00071c640) Stream added, broadcasting: 1\nI0827 19:06:03.544417    2255 log.go:172] (0xc000138630) Reply frame received for 1\nI0827 19:06:03.544457    2255 log.go:172] (0xc000138630) (0xc000686c80) Create stream\nI0827 19:06:03.544477    2255 log.go:172] (0xc000138630) (0xc000686c80) Stream added, broadcasting: 3\nI0827 19:06:03.545322    2255 log.go:172] (0xc000138630) Reply frame received for 3\nI0827 19:06:03.545376    2255 log.go:172] (0xc000138630) (0xc0006c6000) Create stream\nI0827 19:06:03.545393    2255 log.go:172] (0xc000138630) (0xc0006c6000) Stream added, broadcasting: 5\nI0827 19:06:03.545977    2255 log.go:172] (0xc000138630) Reply frame received for 5\nI0827 19:06:03.654500    2255 log.go:172] (0xc000138630) Data frame received for 3\nI0827 19:06:03.654536    2255 log.go:172] (0xc000686c80) (3) Data frame handling\nI0827 19:06:03.654557    2255 log.go:172] (0xc000686c80) (3) Data frame sent\nI0827 19:06:03.654790    2255 log.go:172] (0xc000138630) Data frame received for 3\nI0827 19:06:03.654809    2255 log.go:172] (0xc000686c80) (3) Data frame handling\nI0827 19:06:03.654834    2255 log.go:172] (0xc000138630) Data frame received for 5\nI0827 19:06:03.654864    2255 log.go:172] (0xc0006c6000) (5) Data frame handling\nI0827 19:06:03.656210    2255 log.go:172] (0xc000138630) Data frame received for 1\nI0827 19:06:03.656232    2255 log.go:172] (0xc00071c640) (1) Data frame handling\nI0827 19:06:03.656243    2255 log.go:172] (0xc00071c640) (1) Data frame sent\nI0827 19:06:03.656251    2255 log.go:172] (0xc000138630) (0xc00071c640) Stream removed, broadcasting: 1\nI0827 19:06:03.656265    2255 log.go:172] (0xc000138630) Go away received\nI0827 19:06:03.656514    2255 log.go:172] (0xc000138630) (0xc00071c640) Stream removed, broadcasting: 1\nI0827 19:06:03.656534    2255 log.go:172] (0xc000138630) (0xc000686c80) Stream removed, broadcasting: 3\nI0827 19:06:03.656546    2255 log.go:172] (0xc000138630) (0xc0006c6000) Stream removed, broadcasting: 5\n"
Aug 27 19:06:03.666: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 19:06:03.666: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 19:06:03.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 19:06:04.002: INFO: stderr: "I0827 19:06:03.774028    2278 log.go:172] (0xc000138840) (0xc000121360) Create stream\nI0827 19:06:03.774076    2278 log.go:172] (0xc000138840) (0xc000121360) Stream added, broadcasting: 1\nI0827 19:06:03.776403    2278 log.go:172] (0xc000138840) Reply frame received for 1\nI0827 19:06:03.776432    2278 log.go:172] (0xc000138840) (0xc00073a000) Create stream\nI0827 19:06:03.776446    2278 log.go:172] (0xc000138840) (0xc00073a000) Stream added, broadcasting: 3\nI0827 19:06:03.777614    2278 log.go:172] (0xc000138840) Reply frame received for 3\nI0827 19:06:03.777689    2278 log.go:172] (0xc000138840) (0xc000510000) Create stream\nI0827 19:06:03.777725    2278 log.go:172] (0xc000138840) (0xc000510000) Stream added, broadcasting: 5\nI0827 19:06:03.778588    2278 log.go:172] (0xc000138840) Reply frame received for 5\nI0827 19:06:03.984565    2278 log.go:172] (0xc000138840) Data frame received for 3\nI0827 19:06:03.984588    2278 log.go:172] (0xc00073a000) (3) Data frame handling\nI0827 19:06:03.984628    2278 log.go:172] (0xc00073a000) (3) Data frame sent\nI0827 19:06:03.984684    2278 log.go:172] (0xc000138840) Data frame received for 5\nI0827 19:06:03.984706    2278 log.go:172] (0xc000510000) (5) Data frame handling\nI0827 19:06:03.985069    2278 log.go:172] (0xc000138840) Data frame received for 3\nI0827 19:06:03.985102    2278 log.go:172] (0xc00073a000) (3) Data frame handling\nI0827 19:06:03.988152    2278 log.go:172] (0xc000138840) Data frame received for 1\nI0827 19:06:03.988175    2278 log.go:172] (0xc000121360) (1) Data frame handling\nI0827 19:06:03.988186    2278 log.go:172] (0xc000121360) (1) Data frame sent\nI0827 19:06:03.988207    2278 log.go:172] (0xc000138840) (0xc000121360) Stream removed, broadcasting: 1\nI0827 19:06:03.988308    2278 log.go:172] (0xc000138840) Go away received\nI0827 19:06:03.988463    2278 log.go:172] (0xc000138840) (0xc000121360) Stream removed, broadcasting: 1\nI0827 19:06:03.988486    2278 log.go:172] (0xc000138840) (0xc00073a000) Stream removed, broadcasting: 3\nI0827 19:06:03.988497    2278 log.go:172] (0xc000138840) (0xc000510000) Stream removed, broadcasting: 5\n"
Aug 27 19:06:04.003: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 19:06:04.003: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 19:06:04.003: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 19:06:04.005: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Aug 27 19:06:14.013: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 19:06:14.013: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 19:06:14.013: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 19:06:14.048: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:14.048: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:14.048: INFO: ss-1  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:14.048: INFO: ss-2  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:14.048: INFO: 
Aug 27 19:06:14.048: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 27 19:06:15.264: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:15.264: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:15.264: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:15.264: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:15.264: INFO: 
Aug 27 19:06:15.264: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 27 19:06:16.460: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:16.460: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:16.460: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:16.460: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:16.460: INFO: 
Aug 27 19:06:16.460: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 27 19:06:17.480: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:17.480: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:17.480: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:17.480: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:17.480: INFO: 
Aug 27 19:06:17.480: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 27 19:06:18.512: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:18.512: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:18.512: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:18.512: INFO: ss-2  hunter-worker   Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:52 +0000 UTC  }]
Aug 27 19:06:18.512: INFO: 
Aug 27 19:06:18.512: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 27 19:06:19.518: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:19.518: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:19.518: INFO: 
Aug 27 19:06:19.518: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 27 19:06:20.522: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:20.522: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:20.522: INFO: 
Aug 27 19:06:20.522: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 27 19:06:21.526: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:21.526: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:21.526: INFO: 
Aug 27 19:06:21.526: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 27 19:06:22.531: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:22.531: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:22.531: INFO: 
Aug 27 19:06:22.531: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 27 19:06:23.536: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 27 19:06:23.536: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:06:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:05:32 +0000 UTC  }]
Aug 27 19:06:23.536: INFO: 
Aug 27 19:06:23.536: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-7mj8k
Aug 27 19:06:24.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:06:24.662: INFO: rc: 1
Aug 27 19:06:24.662: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001f74930 exit status 1   true [0xc00208a5c8 0xc00208a5e0 0xc00208a5f8] [0xc00208a5c8 0xc00208a5e0 0xc00208a5f8] [0xc00208a5d8 0xc00208a5f0] [0x935700 0x935700] 0xc001c6c780 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Aug 27 19:06:34.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:06:34.748: INFO: rc: 1
Aug 27 19:06:34.748: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f74a50 exit status 1   true [0xc00208a600 0xc00208a618 0xc00208a630] [0xc00208a600 0xc00208a618 0xc00208a630] [0xc00208a610 0xc00208a628] [0x935700 0x935700] 0xc001c6d260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:06:44.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:06:44.849: INFO: rc: 1
Aug 27 19:06:44.849: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734180 exit status 1   true [0xc00208a000 0xc00208a018 0xc00208a030] [0xc00208a000 0xc00208a018 0xc00208a030] [0xc00208a010 0xc00208a028] [0x935700 0x935700] 0xc0020f21e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:06:54.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:06:54.944: INFO: rc: 1
Aug 27 19:06:54.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0016221b0 exit status 1   true [0xc00041e068 0xc00041e158 0xc00041e330] [0xc00041e068 0xc00041e158 0xc00041e330] [0xc00041e120 0xc00041e2a0] [0x935700 0x935700] 0xc001bb82a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:07:04.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:07:05.032: INFO: rc: 1
Aug 27 19:07:05.032: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b601b0 exit status 1   true [0xc0003fc098 0xc0003fc298 0xc0003fc368] [0xc0003fc098 0xc0003fc298 0xc0003fc368] [0xc0003fc268 0xc0003fc318] [0x935700 0x935700] 0xc00193a300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:07:15.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:07:15.128: INFO: rc: 1
Aug 27 19:07:15.128: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b60330 exit status 1   true [0xc0003fc3d8 0xc0003fc4a0 0xc0003fc5b8] [0xc0003fc3d8 0xc0003fc4a0 0xc0003fc5b8] [0xc0003fc460 0xc0003fc598] [0x935700 0x935700] 0xc00193a600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:07:25.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:07:25.218: INFO: rc: 1
Aug 27 19:07:25.218: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b60450 exit status 1   true [0xc0003fc5e0 0xc0003fc620 0xc0003fc738] [0xc0003fc5e0 0xc0003fc620 0xc0003fc738] [0xc0003fc600 0xc0003fc690] [0x935700 0x935700] 0xc00193aae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:07:35.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:07:35.305: INFO: rc: 1
Aug 27 19:07:35.305: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734300 exit status 1   true [0xc00208a038 0xc00208a050 0xc00208a068] [0xc00208a038 0xc00208a050 0xc00208a068] [0xc00208a048 0xc00208a060] [0x935700 0x935700] 0xc0020f2480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:07:45.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:07:45.390: INFO: rc: 1
Aug 27 19:07:45.390: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734480 exit status 1   true [0xc00208a070 0xc00208a088 0xc00208a0a0] [0xc00208a070 0xc00208a088 0xc00208a0a0] [0xc00208a080 0xc00208a098] [0x935700 0x935700] 0xc0020f2780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:07:55.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:07:55.485: INFO: rc: 1
Aug 27 19:07:55.485: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d50180 exit status 1   true [0xc000c5e000 0xc000c5e018 0xc000c5e030] [0xc000c5e000 0xc000c5e018 0xc000c5e030] [0xc000c5e010 0xc000c5e028] [0x935700 0x935700] 0xc001ea76e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:08:05.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:08:05.584: INFO: rc: 1
Aug 27 19:08:05.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734600 exit status 1   true [0xc00208a0a8 0xc00208a0c0 0xc00208a0d8] [0xc00208a0a8 0xc00208a0c0 0xc00208a0d8] [0xc00208a0b8 0xc00208a0d0] [0x935700 0x935700] 0xc002180600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:08:15.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:08:16.015: INFO: rc: 1
Aug 27 19:08:16.015: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734750 exit status 1   true [0xc00208a0e0 0xc00208a0f8 0xc00208a110] [0xc00208a0e0 0xc00208a0f8 0xc00208a110] [0xc00208a0f0 0xc00208a108] [0x935700 0x935700] 0xc002180a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:08:26.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:08:26.122: INFO: rc: 1
Aug 27 19:08:26.122: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734870 exit status 1   true [0xc00208a118 0xc00208a130 0xc00208a148] [0xc00208a118 0xc00208a130 0xc00208a148] [0xc00208a128 0xc00208a140] [0x935700 0x935700] 0xc002180ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:08:36.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:08:36.200: INFO: rc: 1
Aug 27 19:08:36.200: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d502d0 exit status 1   true [0xc000c5e038 0xc000c5e050 0xc000c5e068] [0xc000c5e038 0xc000c5e050 0xc000c5e068] [0xc000c5e048 0xc000c5e060] [0x935700 0x935700] 0xc001ea7980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:08:46.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:08:46.294: INFO: rc: 1
Aug 27 19:08:46.294: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0027349c0 exit status 1   true [0xc00208a158 0xc00208a170 0xc00208a188] [0xc00208a158 0xc00208a170 0xc00208a188] [0xc00208a168 0xc00208a180] [0x935700 0x935700] 0xc0021814a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:08:56.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:08:56.405: INFO: rc: 1
Aug 27 19:08:56.405: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b60180 exit status 1   true [0xc00016e000 0xc00041e120 0xc00041e2a0] [0xc00016e000 0xc00041e120 0xc00041e2a0] [0xc00041e118 0xc00041e1d0] [0x935700 0x935700] 0xc0021808a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:09:06.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:09:06.501: INFO: rc: 1
Aug 27 19:09:06.501: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0016221e0 exit status 1   true [0xc0003fc098 0xc0003fc298 0xc0003fc368] [0xc0003fc098 0xc0003fc298 0xc0003fc368] [0xc0003fc268 0xc0003fc318] [0x935700 0x935700] 0xc001ea76e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:09:16.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:09:16.590: INFO: rc: 1
Aug 27 19:09:16.590: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b60360 exit status 1   true [0xc00041e330 0xc00041e370 0xc00041e3c0] [0xc00041e330 0xc00041e370 0xc00041e3c0] [0xc00041e350 0xc00041e3a0] [0x935700 0x935700] 0xc002180c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:09:26.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:09:26.689: INFO: rc: 1
Aug 27 19:09:26.690: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001622330 exit status 1   true [0xc0003fc3d8 0xc0003fc4a0 0xc0003fc5b8] [0xc0003fc3d8 0xc0003fc4a0 0xc0003fc5b8] [0xc0003fc460 0xc0003fc598] [0x935700 0x935700] 0xc001ea7980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:09:36.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:09:36.790: INFO: rc: 1
Aug 27 19:09:36.790: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0027341e0 exit status 1   true [0xc000c5e000 0xc000c5e018 0xc000c5e030] [0xc000c5e000 0xc000c5e018 0xc000c5e030] [0xc000c5e010 0xc000c5e028] [0x935700 0x935700] 0xc001bb8240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:09:46.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:09:46.883: INFO: rc: 1
Aug 27 19:09:46.883: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d50120 exit status 1   true [0xc00208a000 0xc00208a018 0xc00208a030] [0xc00208a000 0xc00208a018 0xc00208a030] [0xc00208a010 0xc00208a028] [0x935700 0x935700] 0xc00193a300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:09:56.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:09:56.969: INFO: rc: 1
Aug 27 19:09:56.969: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0016225a0 exit status 1   true [0xc0003fc5e0 0xc0003fc620 0xc0003fc738] [0xc0003fc5e0 0xc0003fc620 0xc0003fc738] [0xc0003fc600 0xc0003fc690] [0x935700 0x935700] 0xc0020f22a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:10:06.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:10:07.067: INFO: rc: 1
Aug 27 19:10:07.068: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734360 exit status 1   true [0xc000c5e038 0xc000c5e050 0xc000c5e068] [0xc000c5e038 0xc000c5e050 0xc000c5e068] [0xc000c5e048 0xc000c5e060] [0x935700 0x935700] 0xc001bb85a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:10:17.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:10:17.160: INFO: rc: 1
Aug 27 19:10:17.160: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002734510 exit status 1   true [0xc000c5e070 0xc000c5e088 0xc000c5e0a0] [0xc000c5e070 0xc000c5e088 0xc000c5e0a0] [0xc000c5e080 0xc000c5e098] [0x935700 0x935700] 0xc001bb8840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:10:27.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:10:27.267: INFO: rc: 1
Aug 27 19:10:27.267: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001622780 exit status 1   true [0xc0003fc780 0xc0003fc7b8 0xc0003fc7d8] [0xc0003fc780 0xc0003fc7b8 0xc0003fc7d8] [0xc0003fc7b0 0xc0003fc7c8] [0x935700 0x935700] 0xc0020f2540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:10:37.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:10:37.366: INFO: rc: 1
Aug 27 19:10:37.366: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001622900 exit status 1   true [0xc0003fc818 0xc0003fc9d0 0xc0003fca48] [0xc0003fc818 0xc0003fc9d0 0xc0003fca48] [0xc0003fc918 0xc0003fca40] [0x935700 0x935700] 0xc0020f2840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:10:47.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:10:47.464: INFO: rc: 1
Aug 27 19:10:47.464: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca1e0 exit status 1   true [0xc001366008 0xc001366020 0xc001366038] [0xc001366008 0xc001366020 0xc001366038] [0xc001366018 0xc001366030] [0x935700 0x935700] 0xc001d3c5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:10:57.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:10:57.573: INFO: rc: 1
Aug 27 19:10:57.573: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d50150 exit status 1   true [0xc00016e000 0xc001366050 0xc001366068] [0xc00016e000 0xc001366050 0xc001366068] [0xc001366048 0xc001366060] [0x935700 0x935700] 0xc001ea7620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:11:07.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:11:07.710: INFO: rc: 1
Aug 27 19:11:07.710: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0016221b0 exit status 1   true [0xc00208a000 0xc00208a018 0xc00208a030] [0xc00208a000 0xc00208a018 0xc00208a030] [0xc00208a010 0xc00208a028] [0x935700 0x935700] 0xc001d3da40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:11:17.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:11:17.796: INFO: rc: 1
Aug 27 19:11:17.796: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d502a0 exit status 1   true [0xc001366070 0xc001366088 0xc0013660a0] [0xc001366070 0xc001366088 0xc0013660a0] [0xc001366080 0xc001366098] [0x935700 0x935700] 0xc001ea7920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 27 19:11:27.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7mj8k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:11:27.911: INFO: rc: 1
Aug 27 19:11:27.911: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Aug 27 19:11:27.911: INFO: Scaling statefulset ss to 0
Aug 27 19:11:27.918: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 27 19:11:27.920: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7mj8k
Aug 27 19:11:27.922: INFO: Scaling statefulset ss to 0
Aug 27 19:11:27.929: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 19:11:27.932: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:11:27.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7mj8k" for this suite.
Aug 27 19:11:34.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:11:34.110: INFO: namespace: e2e-tests-statefulset-7mj8k, resource: bindings, ignored listing per whitelist
Aug 27 19:11:34.126: INFO: namespace e2e-tests-statefulset-7mj8k deletion completed in 6.176246186s

• [SLOW TEST:362.293 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
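
The scale-to-zero teardown logged above ("Scaling statefulset ss to 0" followed by waiting for status.replicas to reach 0) can be reproduced with plain kubectl outside the e2e framework. A minimal sketch, assuming kubectl on PATH, a reachable cluster, and the StatefulSet/namespace names taken from this log (which only exist while the test namespace has not yet been destroyed):

# Scale the StatefulSet down to zero replicas, as the framework does above.
kubectl --namespace=e2e-tests-statefulset-7mj8k scale statefulset ss --replicas=0

# Poll until status.replicas reports 0, the same condition the framework waits for.
while [ "$(kubectl --namespace=e2e-tests-statefulset-7mj8k get statefulset ss \
    -o jsonpath='{.status.replicas}')" != "0" ]; do
  sleep 1
done
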
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:11:34.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 27 19:11:34.300: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:11:34.303: INFO: Number of nodes with available pods: 0
Aug 27 19:11:34.303: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:11:35.308: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:11:35.312: INFO: Number of nodes with available pods: 0
Aug 27 19:11:35.312: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:11:36.495: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:11:36.498: INFO: Number of nodes with available pods: 0
Aug 27 19:11:36.498: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:11:37.334: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:11:37.337: INFO: Number of nodes with available pods: 0
Aug 27 19:11:37.337: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:11:38.308: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:11:38.312: INFO: Number of nodes with available pods: 0
Aug 27 19:11:38.312: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:11:39.315: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:11:39.317: INFO: Number of nodes with available pods: 2
Aug 27 19:11:39.317: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 27 19:11:39.356: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:11:39.401: INFO: Number of nodes with available pods: 2
Aug 27 19:11:39.401: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-hvvnp, will wait for the garbage collector to delete the pods
Aug 27 19:11:40.559: INFO: Deleting DaemonSet.extensions daemon-set took: 5.962934ms
Aug 27 19:11:40.659: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.24999ms
Aug 27 19:11:48.464: INFO: Number of nodes with available pods: 0
Aug 27 19:11:48.464: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 19:11:48.466: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hvvnp/daemonsets","resourceVersion":"2706619"},"items":null}

Aug 27 19:11:48.468: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hvvnp/pods","resourceVersion":"2706619"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:11:48.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hvvnp" for this suite.
Aug 27 19:11:54.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:11:54.516: INFO: namespace: e2e-tests-daemonsets-hvvnp, resource: bindings, ignored listing per whitelist
Aug 27 19:11:54.582: INFO: namespace e2e-tests-daemonsets-hvvnp deletion completed in 6.100520156s

• [SLOW TEST:20.455 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
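
The DaemonSet test above creates a simple DaemonSet and waits until a pod is available on every schedulable node; the control-plane node is skipped because the pods carry no toleration for its node-role.kubernetes.io/master taint. A minimal sketch of the same flow with plain kubectl; the manifest, label, and image below are hypothetical stand-ins, not the e2e framework's own spec:

# Create a simple DaemonSet; without a toleration for the master taint it lands on worker nodes only.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Wait until a daemon pod is available on every schedulable node.
kubectl rollout status daemonset/daemon-set
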
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:11:54.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Aug 27 19:11:54.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:11:57.470: INFO: stderr: ""
Aug 27 19:11:57.470: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 19:11:57.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:11:57.578: INFO: stderr: ""
Aug 27 19:11:57.578: INFO: stdout: "update-demo-nautilus-5j7l5 update-demo-nautilus-tltmt "
Aug 27 19:11:57.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j7l5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:11:57.713: INFO: stderr: ""
Aug 27 19:11:57.713: INFO: stdout: ""
Aug 27 19:11:57.713: INFO: update-demo-nautilus-5j7l5 is created but not running
Aug 27 19:12:02.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:02.827: INFO: stderr: ""
Aug 27 19:12:02.827: INFO: stdout: "update-demo-nautilus-5j7l5 update-demo-nautilus-tltmt "
Aug 27 19:12:02.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j7l5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:02.949: INFO: stderr: ""
Aug 27 19:12:02.949: INFO: stdout: "true"
Aug 27 19:12:02.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j7l5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:03.041: INFO: stderr: ""
Aug 27 19:12:03.041: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 19:12:03.041: INFO: validating pod update-demo-nautilus-5j7l5
Aug 27 19:12:03.045: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 19:12:03.045: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 27 19:12:03.045: INFO: update-demo-nautilus-5j7l5 is verified up and running
Aug 27 19:12:03.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:03.157: INFO: stderr: ""
Aug 27 19:12:03.157: INFO: stdout: "true"
Aug 27 19:12:03.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:03.269: INFO: stderr: ""
Aug 27 19:12:03.269: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 19:12:03.269: INFO: validating pod update-demo-nautilus-tltmt
Aug 27 19:12:03.272: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 19:12:03.272: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 27 19:12:03.272: INFO: update-demo-nautilus-tltmt is verified up and running
STEP: scaling down the replication controller
Aug 27 19:12:03.274: INFO: scanned /root for discovery docs: 
Aug 27 19:12:03.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:04.449: INFO: stderr: ""
Aug 27 19:12:04.449: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 19:12:04.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:04.556: INFO: stderr: ""
Aug 27 19:12:04.556: INFO: stdout: "update-demo-nautilus-5j7l5 update-demo-nautilus-tltmt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 27 19:12:09.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:09.665: INFO: stderr: ""
Aug 27 19:12:09.665: INFO: stdout: "update-demo-nautilus-5j7l5 update-demo-nautilus-tltmt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 27 19:12:14.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:14.783: INFO: stderr: ""
Aug 27 19:12:14.783: INFO: stdout: "update-demo-nautilus-5j7l5 update-demo-nautilus-tltmt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 27 19:12:19.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:19.885: INFO: stderr: ""
Aug 27 19:12:19.885: INFO: stdout: "update-demo-nautilus-tltmt "
Aug 27 19:12:19.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:19.988: INFO: stderr: ""
Aug 27 19:12:19.988: INFO: stdout: "true"
Aug 27 19:12:19.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:20.075: INFO: stderr: ""
Aug 27 19:12:20.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 19:12:20.075: INFO: validating pod update-demo-nautilus-tltmt
Aug 27 19:12:20.078: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 19:12:20.078: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 27 19:12:20.078: INFO: update-demo-nautilus-tltmt is verified up and running
STEP: scaling up the replication controller
Aug 27 19:12:20.079: INFO: scanned /root for discovery docs: 
Aug 27 19:12:20.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:21.243: INFO: stderr: ""
Aug 27 19:12:21.243: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 19:12:21.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:21.352: INFO: stderr: ""
Aug 27 19:12:21.352: INFO: stdout: "update-demo-nautilus-8vrlw update-demo-nautilus-tltmt "
Aug 27 19:12:21.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrlw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:21.453: INFO: stderr: ""
Aug 27 19:12:21.453: INFO: stdout: ""
Aug 27 19:12:21.453: INFO: update-demo-nautilus-8vrlw is created but not running
Aug 27 19:12:26.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:26.617: INFO: stderr: ""
Aug 27 19:12:26.617: INFO: stdout: "update-demo-nautilus-8vrlw update-demo-nautilus-tltmt "
Aug 27 19:12:26.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrlw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:26.721: INFO: stderr: ""
Aug 27 19:12:26.721: INFO: stdout: "true"
Aug 27 19:12:26.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vrlw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:26.819: INFO: stderr: ""
Aug 27 19:12:26.819: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 19:12:26.819: INFO: validating pod update-demo-nautilus-8vrlw
Aug 27 19:12:26.823: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 19:12:26.823: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 27 19:12:26.823: INFO: update-demo-nautilus-8vrlw is verified up and running
Aug 27 19:12:26.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:26.922: INFO: stderr: ""
Aug 27 19:12:26.922: INFO: stdout: "true"
Aug 27 19:12:26.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tltmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:27.027: INFO: stderr: ""
Aug 27 19:12:27.027: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 19:12:27.027: INFO: validating pod update-demo-nautilus-tltmt
Aug 27 19:12:27.030: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 19:12:27.030: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 27 19:12:27.030: INFO: update-demo-nautilus-tltmt is verified up and running
STEP: using delete to clean up resources
Aug 27 19:12:27.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:27.128: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 19:12:27.128: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 19:12:27.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-rwmqd'
Aug 27 19:12:27.245: INFO: stderr: "No resources found.\n"
Aug 27 19:12:27.245: INFO: stdout: ""
Aug 27 19:12:27.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-rwmqd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 19:12:27.344: INFO: stderr: ""
Aug 27 19:12:27.344: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:12:27.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rwmqd" for this suite.
Aug 27 19:12:49.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:12:49.533: INFO: namespace: e2e-tests-kubectl-rwmqd, resource: bindings, ignored listing per whitelist
Aug 27 19:12:49.595: INFO: namespace e2e-tests-kubectl-rwmqd deletion completed in 22.247247071s

• [SLOW TEST:55.012 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
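
The scale-down/scale-up sequence above is driven entirely by kubectl; the two commands below mirror it directly (namespace and controller names are taken from the log and only work while those objects still exist):

# Scale the replication controller, then list the pods selected by name=update-demo
# with the same go-template the test uses to poll for convergence.
kubectl --namespace=e2e-tests-kubectl-rwmqd scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl --namespace=e2e-tests-kubectl-rwmqd get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
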
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:12:49.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Aug 27 19:12:50.283: INFO: created pod pod-service-account-defaultsa
Aug 27 19:12:50.283: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 27 19:12:50.296: INFO: created pod pod-service-account-mountsa
Aug 27 19:12:50.296: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 27 19:12:50.357: INFO: created pod pod-service-account-nomountsa
Aug 27 19:12:50.357: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 27 19:12:50.362: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 27 19:12:50.362: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 27 19:12:50.385: INFO: created pod pod-service-account-mountsa-mountspec
Aug 27 19:12:50.385: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 27 19:12:50.433: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 27 19:12:50.433: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 27 19:12:50.495: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 27 19:12:50.495: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 27 19:12:50.524: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 27 19:12:50.524: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 27 19:12:50.566: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 27 19:12:50.566: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:12:50.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-cj8sc" for this suite.
Aug 27 19:13:22.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:13:22.703: INFO: namespace: e2e-tests-svcaccounts-cj8sc, resource: bindings, ignored listing per whitelist
Aug 27 19:13:22.762: INFO: namespace e2e-tests-svcaccounts-cj8sc deletion completed in 32.123274587s

• [SLOW TEST:33.167 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
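
Opting out of API token automount, as exercised above, comes down to setting automountServiceAccountToken: false on the pod spec (it can also be set on the ServiceAccount itself). A minimal sketch; the pod name, image, and command are hypothetical:

# Pod that opts out of the service account token automount, like pod-service-account-nomountsa-nomountspec above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-token-automount
spec:
  automountServiceAccountToken: false
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # With automount disabled the token directory does not exist inside the container.
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo 'no token mounted'; sleep 3600"]
EOF
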
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:13:22.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 27 19:13:37.801: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:13:39.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-8fqp8" for this suite.
Aug 27 19:14:05.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:14:06.000: INFO: namespace: e2e-tests-replicaset-8fqp8, resource: bindings, ignored listing per whitelist
Aug 27 19:14:06.010: INFO: namespace e2e-tests-replicaset-8fqp8 deletion completed in 26.394718756s

• [SLOW TEST:43.247 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
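Note: adoption and release in the ReplicaSet test above are driven purely by label selection: a pre-existing bare pod that matches the selector gets an ownerReference added (adoption), and changing that label afterwards detaches it again (release). A minimal sketch of a matching ReplicaSet with the apps/v1 types; names are illustrative:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// adoptingReplicaSet selects pods labelled name=pod-adoption-release, so an
// orphan pod carrying that label is adopted on creation; relabelling the pod
// later makes the controller release it and create a replacement.
func adoptingReplicaSet() *appsv1.ReplicaSet {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption-release",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { _ = adoptingReplicaSet() }

------------------------------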
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:14:06.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7a52464b-e899-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 19:14:07.013: INFO: Waiting up to 5m0s for pod "pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b" in namespace "e2e-tests-secrets-rptd8" to be "success or failure"
Aug 27 19:14:07.069: INFO: Pod "pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 55.437443ms
Aug 27 19:14:09.072: INFO: Pod "pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059388236s
Aug 27 19:14:11.098: INFO: Pod "pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08483058s
Aug 27 19:14:13.102: INFO: Pod "pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 6.089162986s
Aug 27 19:14:15.106: INFO: Pod "pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092953378s
STEP: Saw pod success
Aug 27 19:14:15.106: INFO: Pod "pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:14:15.109: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Aug 27 19:14:15.180: INFO: Waiting for pod pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b to disappear
Aug 27 19:14:15.224: INFO: Pod pod-secrets-7a59aab3-e899-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:14:15.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rptd8" for this suite.
Aug 27 19:14:21.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:14:21.303: INFO: namespace: e2e-tests-secrets-rptd8, resource: bindings, ignored listing per whitelist
Aug 27 19:14:21.344: INFO: namespace e2e-tests-secrets-rptd8 deletion completed in 6.115791524s

• [SLOW TEST:15.334 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
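Note: the secret-volume-test pod above mounts the secret as a volume and the test reads the projected file back from the container log. A minimal sketch of an equivalent pod spec with the core/v1 types; the image, mount path, and key name are illustrative assumptions, since the log does not show them:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod mounts an existing Secret into the container filesystem so
// each key appears as a file under the mount path.
func secretVolumePod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/secret-volume/data-1"}, // key name assumed for illustration
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}

func main() { _ = secretVolumePod("secret-test-example") }

------------------------------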
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:14:21.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 27 19:14:26.009: INFO: Successfully updated pod "labelsupdate8308fe74-e899-11ea-b58c-0242ac11000b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:14:30.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b7cb2" for this suite.
Aug 27 19:14:52.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:14:52.114: INFO: namespace: e2e-tests-downward-api-b7cb2, resource: bindings, ignored listing per whitelist
Aug 27 19:14:52.244: INFO: namespace e2e-tests-downward-api-b7cb2 deletion completed in 22.202734269s

• [SLOW TEST:30.899 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
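Note: what the labels-update test exercises is a downwardAPI volume that projects metadata.labels into a file; when the pod's labels are patched ("Successfully updated pod" above), the kubelet rewrites the projected file without restarting the container. A minimal sketch of such a volume, assuming core/v1 types; the file path and container command are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsVolumePod projects the pod's own labels into /etc/podinfo/labels; the
// kubelet refreshes that file whenever the labels are modified.
func labelsVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example",
			Labels: map[string]string{"time": "initial"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
}

func main() { _ = labelsVolumePod() }

------------------------------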
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:14:52.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 19:14:52.565: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 27 19:14:52.600: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:14:52.602: INFO: Number of nodes with available pods: 0
Aug 27 19:14:52.603: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:14:53.607: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:14:53.646: INFO: Number of nodes with available pods: 0
Aug 27 19:14:53.646: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:14:55.224: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:14:55.778: INFO: Number of nodes with available pods: 0
Aug 27 19:14:55.778: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:14:56.737: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:14:56.740: INFO: Number of nodes with available pods: 0
Aug 27 19:14:56.740: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:14:57.608: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:14:57.612: INFO: Number of nodes with available pods: 0
Aug 27 19:14:57.612: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:14:58.607: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:14:58.610: INFO: Number of nodes with available pods: 0
Aug 27 19:14:58.610: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:14:59.609: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:14:59.614: INFO: Number of nodes with available pods: 0
Aug 27 19:14:59.614: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:00.608: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:00.612: INFO: Number of nodes with available pods: 0
Aug 27 19:15:00.612: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:01.608: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:01.612: INFO: Number of nodes with available pods: 1
Aug 27 19:15:01.612: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:02.609: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:02.611: INFO: Number of nodes with available pods: 2
Aug 27 19:15:02.611: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 27 19:15:02.657: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:02.657: INFO: Wrong image for pod: daemon-set-vpmtn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:02.672: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:03.706: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:03.706: INFO: Wrong image for pod: daemon-set-vpmtn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:03.712: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:04.676: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:04.676: INFO: Wrong image for pod: daemon-set-vpmtn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:04.679: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:05.730: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:05.730: INFO: Wrong image for pod: daemon-set-vpmtn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:05.730: INFO: Pod daemon-set-vpmtn is not available
Aug 27 19:15:05.733: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:06.676: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:06.676: INFO: Wrong image for pod: daemon-set-vpmtn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:06.676: INFO: Pod daemon-set-vpmtn is not available
Aug 27 19:15:06.679: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:07.832: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:07.832: INFO: Wrong image for pod: daemon-set-vpmtn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:07.832: INFO: Pod daemon-set-vpmtn is not available
Aug 27 19:15:07.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:08.809: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:08.809: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:09.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:09.676: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:09.676: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:09.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:10.676: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:10.676: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:10.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:11.675: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:11.675: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:11.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:12.907: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:12.907: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:12.910: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:13.760: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:13.760: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:13.764: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:14.677: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:14.677: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:14.682: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:15.678: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:15.679: INFO: Pod daemon-set-gtv75 is not available
Aug 27 19:15:15.682: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:16.869: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:16.895: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:17.765: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:17.811: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:18.676: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:18.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:20.074: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:20.079: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:20.677: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:20.677: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:20.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:21.874: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:21.874: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:21.878: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:22.676: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:22.676: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:22.679: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:23.676: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:23.676: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:23.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:24.760: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:24.760: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:24.764: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:25.677: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:25.677: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:25.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:26.677: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:26.677: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:26.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:27.677: INFO: Wrong image for pod: daemon-set-794q6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 19:15:27.677: INFO: Pod daemon-set-794q6 is not available
Aug 27 19:15:27.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:28.676: INFO: Pod daemon-set-z2c6q is not available
Aug 27 19:15:28.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 27 19:15:28.683: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:28.684: INFO: Number of nodes with available pods: 1
Aug 27 19:15:28.684: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:29.774: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:29.777: INFO: Number of nodes with available pods: 1
Aug 27 19:15:29.777: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:30.689: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:30.691: INFO: Number of nodes with available pods: 1
Aug 27 19:15:30.691: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:31.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:31.826: INFO: Number of nodes with available pods: 1
Aug 27 19:15:31.826: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:32.815: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:32.819: INFO: Number of nodes with available pods: 1
Aug 27 19:15:32.819: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:34.006: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:34.008: INFO: Number of nodes with available pods: 1
Aug 27 19:15:34.008: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:34.690: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:34.694: INFO: Number of nodes with available pods: 1
Aug 27 19:15:34.694: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:35.857: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:35.862: INFO: Number of nodes with available pods: 1
Aug 27 19:15:35.862: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 19:15:36.689: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 19:15:36.692: INFO: Number of nodes with available pods: 2
Aug 27 19:15:36.692: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ck8sq, will wait for the garbage collector to delete the pods
Aug 27 19:15:36.763: INFO: Deleting DaemonSet.extensions daemon-set took: 5.642255ms
Aug 27 19:15:36.863: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.266399ms
Aug 27 19:15:48.524: INFO: Number of nodes with available pods: 0
Aug 27 19:15:48.524: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 19:15:48.598: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ck8sq/daemonsets","resourceVersion":"2707454"},"items":null}

Aug 27 19:15:48.601: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ck8sq/pods","resourceVersion":"2707454"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:15:48.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ck8sq" for this suite.
Aug 27 19:15:54.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:15:54.665: INFO: namespace: e2e-tests-daemonsets-ck8sq, resource: bindings, ignored listing per whitelist
Aug 27 19:15:54.719: INFO: namespace e2e-tests-daemonsets-ck8sq deletion completed in 6.105493432s

• [SLOW TEST:62.474 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
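Note: the sequence above (create the DaemonSet with nginx:1.14-alpine, update the template image to redis:1.0, wait while each node's pod is replaced and the "Wrong image for pod" messages drain away) is what a RollingUpdate update strategy produces. A minimal sketch of such a DaemonSet with the apps/v1 types; names are illustrative:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rollingDaemonSet runs one pod per schedulable node; changing
// Spec.Template.Spec.Containers[0].Image later triggers the node-by-node
// replacement seen in the log.
func rollingDaemonSet(image string) *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: image}},
				},
			},
		},
	}
}

func main() {
	// Initial image; the test later switches it to gcr.io/kubernetes-e2e-test-images/redis:1.0.
	_ = rollingDaemonSet("docker.io/library/nginx:1.14-alpine")
}

------------------------------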
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:15:54.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 27 19:15:54.878: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4td9f,SelfLink:/api/v1/namespaces/e2e-tests-watch-4td9f/configmaps/e2e-watch-test-label-changed,UID:bab447b4-e899-11ea-a485-0242ac120004,ResourceVersion:2707503,Generation:0,CreationTimestamp:2020-08-27 19:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 19:15:54.879: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4td9f,SelfLink:/api/v1/namespaces/e2e-tests-watch-4td9f/configmaps/e2e-watch-test-label-changed,UID:bab447b4-e899-11ea-a485-0242ac120004,ResourceVersion:2707504,Generation:0,CreationTimestamp:2020-08-27 19:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 27 19:15:54.879: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4td9f,SelfLink:/api/v1/namespaces/e2e-tests-watch-4td9f/configmaps/e2e-watch-test-label-changed,UID:bab447b4-e899-11ea-a485-0242ac120004,ResourceVersion:2707505,Generation:0,CreationTimestamp:2020-08-27 19:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 27 19:16:04.908: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4td9f,SelfLink:/api/v1/namespaces/e2e-tests-watch-4td9f/configmaps/e2e-watch-test-label-changed,UID:bab447b4-e899-11ea-a485-0242ac120004,ResourceVersion:2707526,Generation:0,CreationTimestamp:2020-08-27 19:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 19:16:04.908: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4td9f,SelfLink:/api/v1/namespaces/e2e-tests-watch-4td9f/configmaps/e2e-watch-test-label-changed,UID:bab447b4-e899-11ea-a485-0242ac120004,ResourceVersion:2707527,Generation:0,CreationTimestamp:2020-08-27 19:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 27 19:16:04.908: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4td9f,SelfLink:/api/v1/namespaces/e2e-tests-watch-4td9f/configmaps/e2e-watch-test-label-changed,UID:bab447b4-e899-11ea-a485-0242ac120004,ResourceVersion:2707528,Generation:0,CreationTimestamp:2020-08-27 19:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:16:04.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4td9f" for this suite.
Aug 27 19:16:10.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:16:11.004: INFO: namespace: e2e-tests-watch-4td9f, resource: bindings, ignored listing per whitelist
Aug 27 19:16:11.030: INFO: namespace e2e-tests-watch-4td9f deletion completed in 6.101463456s

• [SLOW TEST:16.311 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
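Note: the DELETED event in the middle of the output above is synthetic from the watcher's point of view: the configmap still exists, it simply stopped matching the label selector of the watch, and it reappears as ADDED once the label is restored. A minimal client-go sketch of such a filtered watch, assuming a client-go release contemporary with the v1.13 cluster in this log (Watch without a context argument); the namespace is illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch only configmaps carrying the label used by the test above; objects
	// that lose the label surface as DELETED, and regain it as ADDED.
	w, err := client.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}

------------------------------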
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:16:11.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 19:16:11.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-t5kdl'
Aug 27 19:16:11.317: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 19:16:11.317: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 27 19:16:11.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-t5kdl'
Aug 27 19:16:11.466: INFO: stderr: ""
Aug 27 19:16:11.466: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:16:11.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t5kdl" for this suite.
Aug 27 19:16:33.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:16:33.540: INFO: namespace: e2e-tests-kubectl-t5kdl, resource: bindings, ignored listing per whitelist
Aug 27 19:16:33.593: INFO: namespace e2e-tests-kubectl-t5kdl deletion completed in 22.110431745s

• [SLOW TEST:22.563 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
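Note: the deprecated `kubectl run --generator=job/v1` invocation above amounts to creating a batch/v1 Job whose pod template uses restartPolicy OnFailure. A minimal sketch of that object with the batch/v1 types; names mirror the log but the construction itself is illustrative:

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxJob corresponds to what the job/v1 generator produced: a Job whose pods
// are retried on failure rather than restarted indefinitely.
func nginxJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { _ = nginxJob() }

------------------------------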
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:16:33.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 19:16:33.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-flc2n" to be "success or failure"
Aug 27 19:16:34.039: INFO: Pod "downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.200489ms
Aug 27 19:16:36.043: INFO: Pod "downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04878938s
Aug 27 19:16:38.168: INFO: Pod "downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173226289s
Aug 27 19:16:40.172: INFO: Pod "downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177275882s
Aug 27 19:16:42.312: INFO: Pod "downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.316924424s
STEP: Saw pod success
Aug 27 19:16:42.312: INFO: Pod "downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:16:42.314: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b container client-container: 
STEP: delete the pod
Aug 27 19:16:42.388: INFO: Waiting for pod downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b to disappear
Aug 27 19:16:42.392: INFO: Pod downwardapi-volume-d2056814-e899-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:16:42.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-flc2n" for this suite.
Aug 27 19:16:48.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:16:48.468: INFO: namespace: e2e-tests-downward-api-flc2n, resource: bindings, ignored listing per whitelist
Aug 27 19:16:48.474: INFO: namespace e2e-tests-downward-api-flc2n deletion completed in 6.077457975s

• [SLOW TEST:14.881 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
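Note: here the downwardAPI volume projects a resource field rather than metadata: the container's own limits.cpu is written to a file that the test then reads back from the container log. A minimal sketch, assuming core/v1 types; the file path, limit value, and divisor are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuLimitPod exposes the container's CPU limit through the downward API; with
// a 1m divisor the projected file reports the limit in millicores.
func cpuLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}

func main() { _ = cpuLimitPod() }

------------------------------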
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:16:48.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0827 19:17:02.424025       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 19:17:02.424: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:17:02.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8hw7k" for this suite.
Aug 27 19:17:12.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:17:12.744: INFO: namespace: e2e-tests-gc-8hw7k, resource: bindings, ignored listing per whitelist
Aug 27 19:17:12.785: INFO: namespace e2e-tests-gc-8hw7k deletion completed in 10.293965096s

• [SLOW TEST:24.312 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
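Note: the guarantee exercised above is that a dependent carrying two ownerReferences survives as long as at least one owner is still live, even while the other owner sits in foreground deletion waiting for its dependents. A minimal sketch of giving a pod a second owner reference with the meta/v1 types; the helper and object names are illustrative, not the test's own code:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addSecondOwner appends an extra ownerReference pointing at a controller that
// will stay; when the original owner is deleted, the garbage collector leaves
// this pod alone because a valid owner remains.
func addSecondOwner(pod *corev1.Pod, rc *corev1.ReplicationController) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rc.Name,
		UID:        rc.UID,
	})
}

func main() {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "simpletest-pod"}}
	rc := &corev1.ReplicationController{ObjectMeta: metav1.ObjectMeta{Name: "simpletest-rc-to-stay"}}
	addSecondOwner(pod, rc)
	_ = pod
}

------------------------------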
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:17:12.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 19:17:12.884: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 27 19:17:17.889: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 19:17:17.889: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 27 19:17:19.893: INFO: Creating deployment "test-rollover-deployment"
Aug 27 19:17:19.910: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 27 19:17:21.916: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 27 19:17:21.921: INFO: Ensure that both replica sets have 1 created replica
Aug 27 19:17:21.926: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 27 19:17:21.932: INFO: Updating deployment test-rollover-deployment
Aug 27 19:17:21.932: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 27 19:17:23.963: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 27 19:17:23.970: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 27 19:17:23.976: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:23.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152642, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:25.984: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:25.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152642, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:28.022: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:28.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:29.984: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:29.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:31.983: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:31.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:33.983: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:33.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:35.984: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:35.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:38.197: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 19:17:38.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152639, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:17:39.983: INFO: 
Aug 27 19:17:39.983: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 19:17:39.991: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-kjv5n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kjv5n/deployments/test-rollover-deployment,UID:ed69048c-e899-11ea-a485-0242ac120004,ResourceVersion:2708027,Generation:2,CreationTimestamp:2020-08-27 19:17:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-27 19:17:19 +0000 UTC 2020-08-27 19:17:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-27 19:17:39 +0000 UTC 2020-08-27 19:17:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 27 19:17:39.995: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-kjv5n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kjv5n/replicasets/test-rollover-deployment-5b8479fdb6,UID:eea0282b-e899-11ea-a485-0242ac120004,ResourceVersion:2708015,Generation:2,CreationTimestamp:2020-08-27 19:17:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ed69048c-e899-11ea-a485-0242ac120004 0xc0024a37b7 0xc0024a37b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 27 19:17:39.995: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 27 19:17:39.995: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-kjv5n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kjv5n/replicasets/test-rollover-controller,UID:e938c50a-e899-11ea-a485-0242ac120004,ResourceVersion:2708026,Generation:2,CreationTimestamp:2020-08-27 19:17:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ed69048c-e899-11ea-a485-0242ac120004 0xc0024a3627 0xc0024a3628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 19:17:39.995: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-kjv5n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kjv5n/replicasets/test-rollover-deployment-58494b7559,UID:ed6ce64c-e899-11ea-a485-0242ac120004,ResourceVersion:2707976,Generation:2,CreationTimestamp:2020-08-27 19:17:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ed69048c-e899-11ea-a485-0242ac120004 0xc0024a36e7 0xc0024a36e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 19:17:40.002: INFO: Pod "test-rollover-deployment-5b8479fdb6-zjf5x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-zjf5x,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-kjv5n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kjv5n/pods/test-rollover-deployment-5b8479fdb6-zjf5x,UID:eeb949d1-e899-11ea-a485-0242ac120004,ResourceVersion:2707993,Generation:0,CreationTimestamp:2020-08-27 19:17:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 eea0282b-e899-11ea-a485-0242ac120004 0xc00261de57 0xc00261de58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8qm2g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8qm2g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-8qm2g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00261dee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d7e0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:17:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:17:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:17:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:17:22 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.16,StartTime:2020-08-27 19:17:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-27 19:17:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://21ef35239a133b59cb8f32161faa3d6fcf06fc9b8ff61d7fd7eba9f06f80a1f4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:17:40.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-kjv5n" for this suite.
Aug 27 19:17:50.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:17:50.303: INFO: namespace: e2e-tests-deployment-kjv5n, resource: bindings, ignored listing per whitelist
Aug 27 19:17:50.358: INFO: namespace e2e-tests-deployment-kjv5n deletion completed in 10.352509846s

• [SLOW TEST:37.572 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
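
The rollover Deployment dumped above uses a surge-only rolling update (MaxUnavailable:0, MaxSurge:1) with MinReadySeconds:10, which is why the old ReplicaSets only reach zero replicas once each new pod has been ready for ten seconds. A minimal Go sketch of that spec, built from the values visible in the dump rather than from the e2e framework's own helpers:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // rolloverDeployment mirrors the spec dumped above: one replica, a
    // surge-only rolling update (MaxUnavailable:0, MaxSurge:1), and
    // MinReadySeconds:10 so a new pod must stay ready before an old one is removed.
    func rolloverDeployment() *appsv1.Deployment {
        replicas := int32(1)
        maxUnavailable := intstr.FromInt(0)
        maxSurge := intstr.FromInt(1)
        labels := map[string]string{"name": "rollover-pod"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas:        &replicas,
                MinReadySeconds: 10,
                Selector:        &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxUnavailable: &maxUnavailable,
                        MaxSurge:       &maxSurge,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "redis",
                            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                        }},
                    },
                },
            },
        }
    }
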
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:17:50.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:18:00.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-phv4z" for this suite.
Aug 27 19:18:24.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:18:24.570: INFO: namespace: e2e-tests-replication-controller-phv4z, resource: bindings, ignored listing per whitelist
Aug 27 19:18:24.590: INFO: namespace e2e-tests-replication-controller-phv4z deletion completed in 24.161554329s

• [SLOW TEST:34.231 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
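
The adoption sequence above is: create a bare pod carrying a 'name' label, then create a ReplicationController whose selector matches it; the controller counts the existing pod toward its single replica and takes ownership of it instead of starting a new pod. A rough sketch of the two objects, with the label value and image chosen for illustration rather than read from the log:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // adoptionObjects returns a bare pod with a "name" label and a
    // ReplicationController whose selector matches it. Creating the pod
    // first and the controller second reproduces the adoption scenario.
    func adoptionObjects() (*corev1.Pod, *corev1.ReplicationController) {
        labels := map[string]string{"name": "pod-adoption"} // illustrative value
        replicas := int32(1)
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-adoption",
                    Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
                }},
            },
        }
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec:       pod.Spec,
                },
            },
        }
        return pod, rc
    }
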
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:18:24.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Aug 27 19:18:24.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:25.190: INFO: stderr: ""
Aug 27 19:18:25.191: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 19:18:25.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:25.329: INFO: stderr: ""
Aug 27 19:18:25.329: INFO: stdout: "update-demo-nautilus-n4t6w update-demo-nautilus-vppxt "
Aug 27 19:18:25.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4t6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:25.422: INFO: stderr: ""
Aug 27 19:18:25.423: INFO: stdout: ""
Aug 27 19:18:25.423: INFO: update-demo-nautilus-n4t6w is created but not running
Aug 27 19:18:30.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:30.535: INFO: stderr: ""
Aug 27 19:18:30.535: INFO: stdout: "update-demo-nautilus-n4t6w update-demo-nautilus-vppxt "
Aug 27 19:18:30.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4t6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:30.637: INFO: stderr: ""
Aug 27 19:18:30.638: INFO: stdout: "true"
Aug 27 19:18:30.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4t6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:30.739: INFO: stderr: ""
Aug 27 19:18:30.739: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 19:18:30.739: INFO: validating pod update-demo-nautilus-n4t6w
Aug 27 19:18:30.742: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 19:18:30.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 19:18:30.742: INFO: update-demo-nautilus-n4t6w is verified up and running
Aug 27 19:18:30.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:30.852: INFO: stderr: ""
Aug 27 19:18:30.852: INFO: stdout: "true"
Aug 27 19:18:30.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppxt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:30.943: INFO: stderr: ""
Aug 27 19:18:30.943: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 19:18:30.943: INFO: validating pod update-demo-nautilus-vppxt
Aug 27 19:18:30.947: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 19:18:30.947: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 19:18:30.947: INFO: update-demo-nautilus-vppxt is verified up and running
STEP: using delete to clean up resources
Aug 27 19:18:30.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:31.158: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 19:18:31.158: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 19:18:31.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-qtnll'
Aug 27 19:18:31.290: INFO: stderr: "No resources found.\n"
Aug 27 19:18:31.290: INFO: stdout: ""
Aug 27 19:18:31.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-qtnll -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 19:18:31.381: INFO: stderr: ""
Aug 27 19:18:31.381: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:18:31.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qtnll" for this suite.
Aug 27 19:18:53.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:18:53.585: INFO: namespace: e2e-tests-kubectl-qtnll, resource: bindings, ignored listing per whitelist
Aug 27 19:18:53.641: INFO: namespace e2e-tests-kubectl-qtnll deletion completed in 22.257030493s

• [SLOW TEST:29.051 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
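
The kubectl go-template above keeps asking whether the container named update-demo is in the Running state for every pod labelled name=update-demo. The same check can be written against the API directly; this sketch assumes the pre-1.18 client-go call signatures (no context argument), matching the v1.13-era cluster in this run:

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // updateDemoRunning reports whether every pod labelled name=update-demo
    // has its "update-demo" container in the Running state, the same
    // condition the kubectl go-template above polls for.
    func updateDemoRunning(c kubernetes.Interface, namespace string) (bool, error) {
        pods, err := c.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: "name=update-demo"})
        if err != nil {
            return false, err
        }
        for _, pod := range pods.Items {
            running := false
            for _, cs := range pod.Status.ContainerStatuses {
                if cs.Name == "update-demo" && cs.State.Running != nil {
                    running = true
                }
            }
            if !running {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }
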
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:18:53.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 27 19:18:53.779: INFO: Waiting up to 5m0s for pod "downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b" in namespace "e2e-tests-downward-api-klmb6" to be "success or failure"
Aug 27 19:18:53.824: INFO: Pod "downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.876642ms
Aug 27 19:18:56.056: INFO: Pod "downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276725672s
Aug 27 19:18:58.060: INFO: Pod "downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.281082862s
Aug 27 19:19:00.064: INFO: Pod "downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284629999s
STEP: Saw pod success
Aug 27 19:19:00.064: INFO: Pod "downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:19:00.066: INFO: Trying to get logs from node hunter-worker pod downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b container dapi-container: 
STEP: delete the pod
Aug 27 19:19:00.094: INFO: Waiting for pod downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b to disappear
Aug 27 19:19:00.097: INFO: Pod downward-api-25594ad9-e89a-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:19:00.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-klmb6" for this suite.
Aug 27 19:19:06.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:19:06.205: INFO: namespace: e2e-tests-downward-api-klmb6, resource: bindings, ignored listing per whitelist
Aug 27 19:19:06.221: INFO: namespace e2e-tests-downward-api-klmb6 deletion completed in 6.120717352s

• [SLOW TEST:12.580 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
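
The downward API pod in this test exposes the node's address to its container through an environment variable sourced from status.hostIP. A minimal sketch of such a pod; the image, command, and variable name are illustrative, since the log only shows the container name dapi-container:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIPod sketches a pod whose HOST_IP environment variable is
    // filled in from status.hostIP via the downward API fieldRef.
    func downwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-host-ip"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",                      // illustrative image
                    Command: []string{"sh", "-c", "env"},    // print the injected variables
                    Env: []corev1.EnvVar{{
                        Name: "HOST_IP",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                        },
                    }},
                }},
            },
        }
    }
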
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:19:06.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 19:19:06.336: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 27 19:19:06.354: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 27 19:19:11.487: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 19:19:11.487: INFO: Creating deployment "test-rolling-update-deployment"
Aug 27 19:19:11.492: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 27 19:19:11.638: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 27 19:19:13.816: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 27 19:19:13.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152751, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:19:15.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152751, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:19:17.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152752, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734152751, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 19:19:19.981: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 19:19:20.512: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-gstnk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gstnk/deployments/test-rolling-update-deployment,UID:2fecfa4b-e89a-11ea-a485-0242ac120004,ResourceVersion:2708402,Generation:1,CreationTimestamp:2020-08-27 19:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-27 19:19:12 +0000 UTC 2020-08-27 19:19:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-27 19:19:19 +0000 UTC 2020-08-27 19:19:11 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 27 19:19:20.568: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-gstnk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gstnk/replicasets/test-rolling-update-deployment-75db98fb4c,UID:30047256-e89a-11ea-a485-0242ac120004,ResourceVersion:2708393,Generation:1,CreationTimestamp:2020-08-27 19:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2fecfa4b-e89a-11ea-a485-0242ac120004 0xc0018d9e67 0xc0018d9e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 27 19:19:20.568: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 27 19:19:20.568: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-gstnk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gstnk/replicasets/test-rolling-update-controller,UID:2cdb0162-e89a-11ea-a485-0242ac120004,ResourceVersion:2708401,Generation:2,CreationTimestamp:2020-08-27 19:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2fecfa4b-e89a-11ea-a485-0242ac120004 0xc0018d9da7 0xc0018d9da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 19:19:20.571: INFO: Pod "test-rolling-update-deployment-75db98fb4c-hv42z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-hv42z,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-gstnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gstnk/pods/test-rolling-update-deployment-75db98fb4c-hv42z,UID:3038fbd0-e89a-11ea-a485-0242ac120004,ResourceVersion:2708392,Generation:0,CreationTimestamp:2020-08-27 19:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 30047256-e89a-11ea-a485-0242ac120004 0xc0010dd367 0xc0010dd368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2z4bw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2z4bw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2z4bw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010dd3e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0010dd400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:19:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:19:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:19:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 19:19:12 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.100,StartTime:2020-08-27 19:19:12 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-27 19:19:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://453bb80ecaa038076e70cfe70b0d008dcb5af11185e6da7ef93bbd26b73ef581}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:19:20.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gstnk" for this suite.
Aug 27 19:19:28.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:19:28.641: INFO: namespace: e2e-tests-deployment-gstnk, resource: bindings, ignored listing per whitelist
Aug 27 19:19:28.692: INFO: namespace e2e-tests-deployment-gstnk deletion completed in 8.119011566s

• [SLOW TEST:22.470 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
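
The "MaxUnavailable:25%!,(MISSING)" fragments in the Deployment dump above are a Go format-verb artifact in the object's string output; the underlying values are the defaulted 25% / 25% rolling-update parameters. Expressed with the apps/v1 types they look like this:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // defaultRollingUpdate spells out the strategy the dump garbles:
    // the apiserver-defaulted 25% / 25% rolling update, expressed as
    // string-valued IntOrString fields.
    func defaultRollingUpdate() appsv1.DeploymentStrategy {
        maxUnavailable := intstr.FromString("25%")
        maxSurge := intstr.FromString("25%")
        return appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &maxUnavailable,
                MaxSurge:       &maxSurge,
            },
        }
    }
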
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:19:28.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 27 19:19:28.901: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:19:42.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-qcr78" for this suite.
Aug 27 19:19:54.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:19:54.375: INFO: namespace: e2e-tests-init-container-qcr78, resource: bindings, ignored listing per whitelist
Aug 27 19:19:54.405: INFO: namespace e2e-tests-init-container-qcr78 deletion completed in 12.108079609s

• [SLOW TEST:25.712 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
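
A RestartNever pod with init containers must run each init container to completion, in order, before its regular containers start, and nothing is restarted afterwards. A sketch of the pod shape this test submits; images and commands are illustrative, since the log only records "PodSpec: initContainers in spec.initContainers":

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // restartNeverInitPod sketches a pod whose two init containers must
    // each exit successfully, in order, before the main container runs.
    func restartNeverInitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox", Command: []string{"true"}},
                    {Name: "init2", Image: "busybox", Command: []string{"true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "busybox", Command: []string{"true"}},
                },
            },
        }
    }
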
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:19:54.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b
Aug 27 19:19:54.733: INFO: Pod name my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b: Found 0 pods out of 1
Aug 27 19:20:00.314: INFO: Pod name my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b: Found 1 pods out of 1
Aug 27 19:20:00.314: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b" are running
Aug 27 19:20:00.317: INFO: Pod "my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b-cr7vl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 19:19:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 19:19:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 19:19:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 19:19:54 +0000 UTC Reason: Message:}])
Aug 27 19:20:00.317: INFO: Trying to dial the pod
Aug 27 19:20:05.429: INFO: Controller my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b: Got expected result from replica 1 [my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b-cr7vl]: "my-hostname-basic-49aa577e-e89a-11ea-b58c-0242ac11000b-cr7vl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:20:05.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-h6cpn" for this suite.
Aug 27 19:20:11.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:20:11.550: INFO: namespace: e2e-tests-replication-controller-h6cpn, resource: bindings, ignored listing per whitelist
Aug 27 19:20:11.563: INFO: namespace e2e-tests-replication-controller-h6cpn deletion completed in 6.129788186s

• [SLOW TEST:17.158 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
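
"Trying to dial the pod" fetches each replica through the apiserver's pods/proxy subresource and checks that the response contains the replica's own name. Roughly, and again assuming the older context-free client-go signatures used against this v1.13 cluster:

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
    )

    // dialReplica fetches a replica's HTTP response through the apiserver's
    // pod proxy subresource, which is approximately what the dial step above
    // does before comparing the body with the pod name.
    func dialReplica(c kubernetes.Interface, namespace, podName string) (string, error) {
        body, err := c.CoreV1().RESTClient().Get().
            Namespace(namespace).
            Resource("pods").
            Name(podName).
            SubResource("proxy").
            Do().
            Raw()
        if err != nil {
            return "", err
        }
        return string(body), nil
    }
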
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:20:11.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-53cd1b01-e89a-11ea-b58c-0242ac11000b
STEP: Creating a pod to test consume secrets
Aug 27 19:20:11.715: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b" in namespace "e2e-tests-projected-f46lj" to be "success or failure"
Aug 27 19:20:11.804: INFO: Pod "pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.492262ms
Aug 27 19:20:13.808: INFO: Pod "pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092679654s
Aug 27 19:20:15.812: INFO: Pod "pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096583583s
STEP: Saw pod success
Aug 27 19:20:15.812: INFO: Pod "pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b" satisfied condition "success or failure"
Aug 27 19:20:15.814: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 19:20:15.854: INFO: Waiting for pod pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b to disappear
Aug 27 19:20:15.866: INFO: Pod pod-projected-secrets-53cfb680-e89a-11ea-b58c-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:20:15.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f46lj" for this suite.
Aug 27 19:20:21.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:20:21.937: INFO: namespace: e2e-tests-projected-f46lj, resource: bindings, ignored listing per whitelist
Aug 27 19:20:22.005: INFO: namespace e2e-tests-projected-f46lj deletion completed in 6.135303514s

• [SLOW TEST:10.442 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
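
The projected-secret pod mounts a projected volume backed by a single secret and sets an explicit DefaultMode on it, which the test then verifies from inside the container. A sketch of that layout; the 0400 mode and mount path are illustrative values, not read from the log:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedSecretPod sketches a pod whose projected volume is backed by
    // one secret and mounted with an explicit default file mode.
    func projectedSecretPod(secretName string) *corev1.Pod {
        defaultMode := int32(0400) // illustrative mode
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &defaultMode,
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "projected-secret-volume-test",
                    Image: "busybox", // illustrative image
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-secret-volume",
                        MountPath: "/etc/projected-secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
    }
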
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 19:20:22.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6dsvv
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Aug 27 19:20:22.147: INFO: Found 0 stateful pods, waiting for 3
Aug 27 19:20:32.152: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 19:20:32.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 19:20:32.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 19:20:42.351: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 19:20:42.351: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 19:20:42.351: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 19:20:42.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6dsvv ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 19:20:43.334: INFO: stderr: "I0827 19:20:42.679494    3813 log.go:172] (0xc0006c4370) (0xc000736640) Create stream\nI0827 19:20:42.679563    3813 log.go:172] (0xc0006c4370) (0xc000736640) Stream added, broadcasting: 1\nI0827 19:20:42.682426    3813 log.go:172] (0xc0006c4370) Reply frame received for 1\nI0827 19:20:42.682480    3813 log.go:172] (0xc0006c4370) (0xc0005d2d20) Create stream\nI0827 19:20:42.682506    3813 log.go:172] (0xc0006c4370) (0xc0005d2d20) Stream added, broadcasting: 3\nI0827 19:20:42.683723    3813 log.go:172] (0xc0006c4370) Reply frame received for 3\nI0827 19:20:42.683770    3813 log.go:172] (0xc0006c4370) (0xc000220000) Create stream\nI0827 19:20:42.683802    3813 log.go:172] (0xc0006c4370) (0xc000220000) Stream added, broadcasting: 5\nI0827 19:20:42.684915    3813 log.go:172] (0xc0006c4370) Reply frame received for 5\nI0827 19:20:43.325614    3813 log.go:172] (0xc0006c4370) Data frame received for 3\nI0827 19:20:43.325657    3813 log.go:172] (0xc0005d2d20) (3) Data frame handling\nI0827 19:20:43.325675    3813 log.go:172] (0xc0005d2d20) (3) Data frame sent\nI0827 19:20:43.326453    3813 log.go:172] (0xc0006c4370) Data frame received for 3\nI0827 19:20:43.326485    3813 log.go:172] (0xc0005d2d20) (3) Data frame handling\nI0827 19:20:43.326511    3813 log.go:172] (0xc0006c4370) Data frame received for 5\nI0827 19:20:43.326528    3813 log.go:172] (0xc000220000) (5) Data frame handling\nI0827 19:20:43.327988    3813 log.go:172] (0xc0006c4370) Data frame received for 1\nI0827 19:20:43.328008    3813 log.go:172] (0xc000736640) (1) Data frame handling\nI0827 19:20:43.328016    3813 log.go:172] (0xc000736640) (1) Data frame sent\nI0827 19:20:43.328027    3813 log.go:172] (0xc0006c4370) (0xc000736640) Stream removed, broadcasting: 1\nI0827 19:20:43.328069    3813 log.go:172] (0xc0006c4370) Go away received\nI0827 19:20:43.328244    3813 log.go:172] (0xc0006c4370) (0xc000736640) Stream removed, broadcasting: 1\nI0827 19:20:43.328266    3813 log.go:172] (0xc0006c4370) (0xc0005d2d20) Stream removed, broadcasting: 3\nI0827 19:20:43.328281    3813 log.go:172] (0xc0006c4370) (0xc000220000) Stream removed, broadcasting: 5\n"
Aug 27 19:20:43.334: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 19:20:43.334: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 27 19:20:53.382: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 27 19:21:03.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6dsvv ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:21:03.723: INFO: stderr: "I0827 19:21:03.610707    3836 log.go:172] (0xc00077a160) (0xc0006e6640) Create stream\nI0827 19:21:03.610768    3836 log.go:172] (0xc00077a160) (0xc0006e6640) Stream added, broadcasting: 1\nI0827 19:21:03.613202    3836 log.go:172] (0xc00077a160) Reply frame received for 1\nI0827 19:21:03.613259    3836 log.go:172] (0xc00077a160) (0xc00023edc0) Create stream\nI0827 19:21:03.613277    3836 log.go:172] (0xc00077a160) (0xc00023edc0) Stream added, broadcasting: 3\nI0827 19:21:03.614485    3836 log.go:172] (0xc00077a160) Reply frame received for 3\nI0827 19:21:03.614527    3836 log.go:172] (0xc00077a160) (0xc000222000) Create stream\nI0827 19:21:03.614546    3836 log.go:172] (0xc00077a160) (0xc000222000) Stream added, broadcasting: 5\nI0827 19:21:03.615474    3836 log.go:172] (0xc00077a160) Reply frame received for 5\nI0827 19:21:03.711502    3836 log.go:172] (0xc00077a160) Data frame received for 5\nI0827 19:21:03.711544    3836 log.go:172] (0xc000222000) (5) Data frame handling\nI0827 19:21:03.711609    3836 log.go:172] (0xc00077a160) Data frame received for 3\nI0827 19:21:03.711645    3836 log.go:172] (0xc00023edc0) (3) Data frame handling\nI0827 19:21:03.711686    3836 log.go:172] (0xc00023edc0) (3) Data frame sent\nI0827 19:21:03.711705    3836 log.go:172] (0xc00077a160) Data frame received for 3\nI0827 19:21:03.711713    3836 log.go:172] (0xc00023edc0) (3) Data frame handling\nI0827 19:21:03.713310    3836 log.go:172] (0xc00077a160) Data frame received for 1\nI0827 19:21:03.713345    3836 log.go:172] (0xc0006e6640) (1) Data frame handling\nI0827 19:21:03.713366    3836 log.go:172] (0xc0006e6640) (1) Data frame sent\nI0827 19:21:03.713387    3836 log.go:172] (0xc00077a160) (0xc0006e6640) Stream removed, broadcasting: 1\nI0827 19:21:03.713466    3836 log.go:172] (0xc00077a160) Go away received\nI0827 19:21:03.713593    3836 log.go:172] (0xc00077a160) (0xc0006e6640) Stream removed, broadcasting: 1\nI0827 19:21:03.713627    3836 log.go:172] (0xc00077a160) (0xc00023edc0) Stream removed, broadcasting: 3\nI0827 19:21:03.713643    3836 log.go:172] (0xc00077a160) (0xc000222000) Stream removed, broadcasting: 5\n"
Aug 27 19:21:03.723: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 19:21:03.723: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 19:21:13.767: INFO: Waiting for StatefulSet e2e-tests-statefulset-6dsvv/ss2 to complete update
Aug 27 19:21:13.767: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 19:21:13.767: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 19:21:13.767: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 19:21:23.809: INFO: Waiting for StatefulSet e2e-tests-statefulset-6dsvv/ss2 to complete update
Aug 27 19:21:23.809: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 19:21:23.809: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 19:21:33.779: INFO: Waiting for StatefulSet e2e-tests-statefulset-6dsvv/ss2 to complete update
Aug 27 19:21:33.779: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
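The revisions named in these wait messages can be inspected directly: the StatefulSet's status carries the current and update revision, and each pod records its own revision in the controller-revision-hash label (a sketch using the pod names from this log):

    # StatefulSet-level view of the rollout
    kubectl --namespace=e2e-tests-statefulset-6dsvv get statefulset ss2 \
        -o jsonpath='{.status.currentRevision} {.status.updateRevision}{"\n"}'
    # Per-pod revision via the controller-revision-hash label column
    kubectl --namespace=e2e-tests-statefulset-6dsvv get pods ss2-0 ss2-1 ss2-2 \
        -L controller-revision-hash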
STEP: Rolling back to a previous revision
Aug 27 19:21:43.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6dsvv ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 27 19:21:44.114: INFO: stderr: "I0827 19:21:43.947896    3859 log.go:172] (0xc000702420) (0xc0006132c0) Create stream\nI0827 19:21:43.947971    3859 log.go:172] (0xc000702420) (0xc0006132c0) Stream added, broadcasting: 1\nI0827 19:21:43.951175    3859 log.go:172] (0xc000702420) Reply frame received for 1\nI0827 19:21:43.951228    3859 log.go:172] (0xc000702420) (0xc000746000) Create stream\nI0827 19:21:43.951243    3859 log.go:172] (0xc000702420) (0xc000746000) Stream added, broadcasting: 3\nI0827 19:21:43.952135    3859 log.go:172] (0xc000702420) Reply frame received for 3\nI0827 19:21:43.952159    3859 log.go:172] (0xc000702420) (0xc000613360) Create stream\nI0827 19:21:43.952168    3859 log.go:172] (0xc000702420) (0xc000613360) Stream added, broadcasting: 5\nI0827 19:21:43.953222    3859 log.go:172] (0xc000702420) Reply frame received for 5\nI0827 19:21:44.101052    3859 log.go:172] (0xc000702420) Data frame received for 3\nI0827 19:21:44.101100    3859 log.go:172] (0xc000746000) (3) Data frame handling\nI0827 19:21:44.101116    3859 log.go:172] (0xc000746000) (3) Data frame sent\nI0827 19:21:44.101125    3859 log.go:172] (0xc000702420) Data frame received for 3\nI0827 19:21:44.101135    3859 log.go:172] (0xc000746000) (3) Data frame handling\nI0827 19:21:44.101376    3859 log.go:172] (0xc000702420) Data frame received for 5\nI0827 19:21:44.101403    3859 log.go:172] (0xc000613360) (5) Data frame handling\nI0827 19:21:44.103193    3859 log.go:172] (0xc000702420) Data frame received for 1\nI0827 19:21:44.103212    3859 log.go:172] (0xc0006132c0) (1) Data frame handling\nI0827 19:21:44.103232    3859 log.go:172] (0xc0006132c0) (1) Data frame sent\nI0827 19:21:44.103256    3859 log.go:172] (0xc000702420) (0xc0006132c0) Stream removed, broadcasting: 1\nI0827 19:21:44.103398    3859 log.go:172] (0xc000702420) Go away received\nI0827 19:21:44.103432    3859 log.go:172] (0xc000702420) (0xc0006132c0) Stream removed, broadcasting: 1\nI0827 19:21:44.103447    3859 log.go:172] (0xc000702420) (0xc000746000) Stream removed, broadcasting: 3\nI0827 19:21:44.103461    3859 log.go:172] (0xc000702420) (0xc000613360) Stream removed, broadcasting: 5\n"
Aug 27 19:21:44.114: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 27 19:21:44.114: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 27 19:21:54.145: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
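The rollback here is performed by setting the template image back to nginx:1.14-alpine through the API; done by hand, that is either the reverse of the earlier set image or, on a kubectl new enough to support it for StatefulSets, a rollout undo (a sketch; the nginx container name is again an assumption):

    kubectl --namespace=e2e-tests-statefulset-6dsvv set image statefulset/ss2 \
        nginx=docker.io/library/nginx:1.14-alpine
    # or, where supported:
    kubectl --namespace=e2e-tests-statefulset-6dsvv rollout undo statefulset/ss2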
Aug 27 19:22:04.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6dsvv ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 27 19:22:04.395: INFO: stderr: "I0827 19:22:04.314278    3881 log.go:172] (0xc0006dc370) (0xc0008705a0) Create stream\nI0827 19:22:04.314343    3881 log.go:172] (0xc0006dc370) (0xc0008705a0) Stream added, broadcasting: 1\nI0827 19:22:04.316963    3881 log.go:172] (0xc0006dc370) Reply frame received for 1\nI0827 19:22:04.316997    3881 log.go:172] (0xc0006dc370) (0xc000870640) Create stream\nI0827 19:22:04.317007    3881 log.go:172] (0xc0006dc370) (0xc000870640) Stream added, broadcasting: 3\nI0827 19:22:04.317885    3881 log.go:172] (0xc0006dc370) Reply frame received for 3\nI0827 19:22:04.317940    3881 log.go:172] (0xc0006dc370) (0xc000704000) Create stream\nI0827 19:22:04.317967    3881 log.go:172] (0xc0006dc370) (0xc000704000) Stream added, broadcasting: 5\nI0827 19:22:04.318682    3881 log.go:172] (0xc0006dc370) Reply frame received for 5\nI0827 19:22:04.386077    3881 log.go:172] (0xc0006dc370) Data frame received for 5\nI0827 19:22:04.386124    3881 log.go:172] (0xc0006dc370) Data frame received for 3\nI0827 19:22:04.386162    3881 log.go:172] (0xc000870640) (3) Data frame handling\nI0827 19:22:04.386179    3881 log.go:172] (0xc000870640) (3) Data frame sent\nI0827 19:22:04.386189    3881 log.go:172] (0xc0006dc370) Data frame received for 3\nI0827 19:22:04.386202    3881 log.go:172] (0xc000870640) (3) Data frame handling\nI0827 19:22:04.386233    3881 log.go:172] (0xc000704000) (5) Data frame handling\nI0827 19:22:04.387562    3881 log.go:172] (0xc0006dc370) Data frame received for 1\nI0827 19:22:04.387580    3881 log.go:172] (0xc0008705a0) (1) Data frame handling\nI0827 19:22:04.387592    3881 log.go:172] (0xc0008705a0) (1) Data frame sent\nI0827 19:22:04.387608    3881 log.go:172] (0xc0006dc370) (0xc0008705a0) Stream removed, broadcasting: 1\nI0827 19:22:04.387631    3881 log.go:172] (0xc0006dc370) Go away received\nI0827 19:22:04.387884    3881 log.go:172] (0xc0006dc370) (0xc0008705a0) Stream removed, broadcasting: 1\nI0827 19:22:04.387915    3881 log.go:172] (0xc0006dc370) (0xc000870640) Stream removed, broadcasting: 3\nI0827 19:22:04.387936    3881 log.go:172] (0xc0006dc370) (0xc000704000) Stream removed, broadcasting: 5\n"
Aug 27 19:22:04.396: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 27 19:22:04.396: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 27 19:22:14.414: INFO: Waiting for StatefulSet e2e-tests-statefulset-6dsvv/ss2 to complete update
Aug 27 19:22:14.414: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 27 19:22:14.414: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 27 19:22:14.414: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 27 19:22:24.422: INFO: Waiting for StatefulSet e2e-tests-statefulset-6dsvv/ss2 to complete update
Aug 27 19:22:24.422: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 27 19:22:24.422: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 27 19:22:34.428: INFO: Waiting for StatefulSet e2e-tests-statefulset-6dsvv/ss2 to complete update
Aug 27 19:22:34.428: INFO: Waiting for Pod e2e-tests-statefulset-6dsvv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
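The polling loop above is essentially what kubectl reports interactively; a watcher run alongside the test would look roughly like this (a sketch, assuming a kubectl version that supports StatefulSet rollout status):

    kubectl --namespace=e2e-tests-statefulset-6dsvv rollout status statefulset/ss2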
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 27 19:22:44.701: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6dsvv
Aug 27 19:22:45.064: INFO: Scaling statefulset ss2 to 0
Aug 27 19:23:15.416: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 19:23:15.419: INFO: Deleting statefulset ss2
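The cleanup above scales the set to zero before deleting it, so pods terminate from the highest ordinal down before the object itself is removed; expressed as kubectl commands it is roughly:

    # Scale down first, then delete once status.replicas has reached 0
    kubectl --namespace=e2e-tests-statefulset-6dsvv scale statefulset ss2 --replicas=0
    kubectl --namespace=e2e-tests-statefulset-6dsvv delete statefulset ss2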
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 19:23:15.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6dsvv" for this suite.
Aug 27 19:23:23.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 19:23:23.554: INFO: namespace: e2e-tests-statefulset-6dsvv, resource: bindings, ignored listing per whitelist
Aug 27 19:23:23.587: INFO: namespace e2e-tests-statefulset-6dsvv deletion completed in 8.146166606s

• [SLOW TEST:181.581 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
Aug 27 19:23:23.587: INFO: Running AfterSuite actions on all nodes
Aug 27 19:23:23.587: INFO: Running AfterSuite actions on node 1
Aug 27 19:23:23.587: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 7264.453 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS